Test Report: KVM_Linux_crio 20451

3de5109224746595ef816ce07f095d1725de7bd9:2025-02-24:38483

Test fail (10/321)

TestAddons/parallel/Ingress (155.56s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-641952 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-641952 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-641952 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [3cd89051-3c4b-48a8-a918-4fe1e668d737] Pending
helpers_test.go:344: "nginx" [3cd89051-3c4b-48a8-a918-4fe1e668d737] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [3cd89051-3c4b-48a8-a918-4fe1e668d737] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004948963s
I0224 12:05:05.088866  894564 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-641952 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-641952 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.011160461s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-641952 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-641952 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.150
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-641952 -n addons-641952
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-641952 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-641952 logs -n 25: (1.48427665s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-290273                                                                     | download-only-290273 | jenkins | v1.35.0 | 24 Feb 25 12:00 UTC | 24 Feb 25 12:00 UTC |
	| delete  | -p download-only-675121                                                                     | download-only-675121 | jenkins | v1.35.0 | 24 Feb 25 12:00 UTC | 24 Feb 25 12:00 UTC |
	| delete  | -p download-only-290273                                                                     | download-only-290273 | jenkins | v1.35.0 | 24 Feb 25 12:00 UTC | 24 Feb 25 12:00 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-786462 | jenkins | v1.35.0 | 24 Feb 25 12:00 UTC |                     |
	|         | binary-mirror-786462                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:35645                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-786462                                                                     | binary-mirror-786462 | jenkins | v1.35.0 | 24 Feb 25 12:00 UTC | 24 Feb 25 12:00 UTC |
	| addons  | disable dashboard -p                                                                        | addons-641952        | jenkins | v1.35.0 | 24 Feb 25 12:00 UTC |                     |
	|         | addons-641952                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-641952        | jenkins | v1.35.0 | 24 Feb 25 12:00 UTC |                     |
	|         | addons-641952                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-641952 --wait=true                                                                | addons-641952        | jenkins | v1.35.0 | 24 Feb 25 12:00 UTC | 24 Feb 25 12:04 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-641952 addons disable                                                                | addons-641952        | jenkins | v1.35.0 | 24 Feb 25 12:04 UTC | 24 Feb 25 12:04 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-641952 addons disable                                                                | addons-641952        | jenkins | v1.35.0 | 24 Feb 25 12:04 UTC | 24 Feb 25 12:04 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-641952 addons                                                                        | addons-641952        | jenkins | v1.35.0 | 24 Feb 25 12:04 UTC | 24 Feb 25 12:04 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-641952 addons disable                                                                | addons-641952        | jenkins | v1.35.0 | 24 Feb 25 12:04 UTC | 24 Feb 25 12:04 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-641952 addons                                                                        | addons-641952        | jenkins | v1.35.0 | 24 Feb 25 12:04 UTC | 24 Feb 25 12:04 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-641952 ip                                                                            | addons-641952        | jenkins | v1.35.0 | 24 Feb 25 12:04 UTC | 24 Feb 25 12:04 UTC |
	| addons  | addons-641952 addons disable                                                                | addons-641952        | jenkins | v1.35.0 | 24 Feb 25 12:04 UTC | 24 Feb 25 12:04 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-641952 addons                                                                        | addons-641952        | jenkins | v1.35.0 | 24 Feb 25 12:04 UTC | 24 Feb 25 12:04 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-641952 addons                                                                        | addons-641952        | jenkins | v1.35.0 | 24 Feb 25 12:04 UTC | 24 Feb 25 12:04 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-641952        | jenkins | v1.35.0 | 24 Feb 25 12:04 UTC | 24 Feb 25 12:04 UTC |
	|         | -p addons-641952                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-641952 ssh curl -s                                                                   | addons-641952        | jenkins | v1.35.0 | 24 Feb 25 12:05 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-641952 ssh cat                                                                       | addons-641952        | jenkins | v1.35.0 | 24 Feb 25 12:07 UTC | 24 Feb 25 12:07 UTC |
	|         | /opt/local-path-provisioner/pvc-cd8fcaca-bd54-49a6-9e22-383da91e5d0a_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-641952 addons disable                                                                | addons-641952        | jenkins | v1.35.0 | 24 Feb 25 12:07 UTC | 24 Feb 25 12:07 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-641952 addons disable                                                                | addons-641952        | jenkins | v1.35.0 | 24 Feb 25 12:07 UTC | 24 Feb 25 12:07 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-641952 addons                                                                        | addons-641952        | jenkins | v1.35.0 | 24 Feb 25 12:07 UTC | 24 Feb 25 12:07 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-641952 addons                                                                        | addons-641952        | jenkins | v1.35.0 | 24 Feb 25 12:07 UTC |                     |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-641952 ip                                                                            | addons-641952        | jenkins | v1.35.0 | 24 Feb 25 12:07 UTC | 24 Feb 25 12:07 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/24 12:00:46
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 12:00:46.097771  895269 out.go:345] Setting OutFile to fd 1 ...
	I0224 12:00:46.098054  895269 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:00:46.098066  895269 out.go:358] Setting ErrFile to fd 2...
	I0224 12:00:46.098070  895269 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:00:46.098268  895269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-887294/.minikube/bin
	I0224 12:00:46.098977  895269 out.go:352] Setting JSON to false
	I0224 12:00:46.100090  895269 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6187,"bootTime":1740392259,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 12:00:46.100204  895269 start.go:139] virtualization: kvm guest
	I0224 12:00:46.102581  895269 out.go:177] * [addons-641952] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0224 12:00:46.104184  895269 notify.go:220] Checking for updates...
	I0224 12:00:46.104272  895269 out.go:177]   - MINIKUBE_LOCATION=20451
	I0224 12:00:46.105904  895269 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 12:00:46.107437  895269 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20451-887294/kubeconfig
	I0224 12:00:46.108879  895269 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-887294/.minikube
	I0224 12:00:46.110134  895269 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0224 12:00:46.111610  895269 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 12:00:46.113077  895269 driver.go:394] Setting default libvirt URI to qemu:///system
	I0224 12:00:46.147099  895269 out.go:177] * Using the kvm2 driver based on user configuration
	I0224 12:00:46.148653  895269 start.go:297] selected driver: kvm2
	I0224 12:00:46.148679  895269 start.go:901] validating driver "kvm2" against <nil>
	I0224 12:00:46.148693  895269 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 12:00:46.149485  895269 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 12:00:46.149587  895269 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20451-887294/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0224 12:00:46.165785  895269 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0224 12:00:46.165851  895269 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0224 12:00:46.166113  895269 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0224 12:00:46.166155  895269 cni.go:84] Creating CNI manager for ""
	I0224 12:00:46.166202  895269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0224 12:00:46.166211  895269 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0224 12:00:46.166266  895269 start.go:340] cluster config:
	{Name:addons-641952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-641952 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 12:00:46.166369  895269 iso.go:125] acquiring lock: {Name:mk57408cca66a96a13d93cda43cdfac6e61aef3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 12:00:46.168404  895269 out.go:177] * Starting "addons-641952" primary control-plane node in "addons-641952" cluster
	I0224 12:00:46.169988  895269 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0224 12:00:46.170052  895269 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0224 12:00:46.170070  895269 cache.go:56] Caching tarball of preloaded images
	I0224 12:00:46.170248  895269 preload.go:172] Found /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0224 12:00:46.170263  895269 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0224 12:00:46.170608  895269 profile.go:143] Saving config to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/config.json ...
	I0224 12:00:46.170640  895269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/config.json: {Name:mk4a6caef6df4a595f95e1f0f7c8125b6a478b8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 12:00:46.170815  895269 start.go:360] acquireMachinesLock for addons-641952: {Name:mk023761b01bb629a1acd40bc8104cc517b0e15b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0224 12:00:46.170885  895269 start.go:364] duration metric: took 51.279µs to acquireMachinesLock for "addons-641952"
	I0224 12:00:46.170912  895269 start.go:93] Provisioning new machine with config: &{Name:addons-641952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-641952 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0224 12:00:46.170986  895269 start.go:125] createHost starting for "" (driver="kvm2")
	I0224 12:00:46.172757  895269 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0224 12:00:46.172952  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:00:46.173018  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:00:46.189150  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38947
	I0224 12:00:46.189802  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:00:46.190458  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:00:46.190481  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:00:46.190899  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:00:46.191123  895269 main.go:141] libmachine: (addons-641952) Calling .GetMachineName
	I0224 12:00:46.191304  895269 main.go:141] libmachine: (addons-641952) Calling .DriverName
	I0224 12:00:46.191492  895269 start.go:159] libmachine.API.Create for "addons-641952" (driver="kvm2")
	I0224 12:00:46.191530  895269 client.go:168] LocalClient.Create starting
	I0224 12:00:46.191583  895269 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem
	I0224 12:00:46.649558  895269 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem
	I0224 12:00:46.736299  895269 main.go:141] libmachine: Running pre-create checks...
	I0224 12:00:46.736326  895269 main.go:141] libmachine: (addons-641952) Calling .PreCreateCheck
	I0224 12:00:46.736902  895269 main.go:141] libmachine: (addons-641952) Calling .GetConfigRaw
	I0224 12:00:46.737447  895269 main.go:141] libmachine: Creating machine...
	I0224 12:00:46.737464  895269 main.go:141] libmachine: (addons-641952) Calling .Create
	I0224 12:00:46.737651  895269 main.go:141] libmachine: (addons-641952) creating KVM machine...
	I0224 12:00:46.737672  895269 main.go:141] libmachine: (addons-641952) creating network...
	I0224 12:00:46.738902  895269 main.go:141] libmachine: (addons-641952) DBG | found existing default KVM network
	I0224 12:00:46.739662  895269 main.go:141] libmachine: (addons-641952) DBG | I0224 12:00:46.739515  895291 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231f0}
	I0224 12:00:46.739741  895269 main.go:141] libmachine: (addons-641952) DBG | created network xml: 
	I0224 12:00:46.739772  895269 main.go:141] libmachine: (addons-641952) DBG | <network>
	I0224 12:00:46.739783  895269 main.go:141] libmachine: (addons-641952) DBG |   <name>mk-addons-641952</name>
	I0224 12:00:46.739797  895269 main.go:141] libmachine: (addons-641952) DBG |   <dns enable='no'/>
	I0224 12:00:46.739804  895269 main.go:141] libmachine: (addons-641952) DBG |   
	I0224 12:00:46.739813  895269 main.go:141] libmachine: (addons-641952) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0224 12:00:46.739826  895269 main.go:141] libmachine: (addons-641952) DBG |     <dhcp>
	I0224 12:00:46.739838  895269 main.go:141] libmachine: (addons-641952) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0224 12:00:46.739851  895269 main.go:141] libmachine: (addons-641952) DBG |     </dhcp>
	I0224 12:00:46.739861  895269 main.go:141] libmachine: (addons-641952) DBG |   </ip>
	I0224 12:00:46.739872  895269 main.go:141] libmachine: (addons-641952) DBG |   
	I0224 12:00:46.739881  895269 main.go:141] libmachine: (addons-641952) DBG | </network>
	I0224 12:00:46.739915  895269 main.go:141] libmachine: (addons-641952) DBG | 
	I0224 12:00:46.745698  895269 main.go:141] libmachine: (addons-641952) DBG | trying to create private KVM network mk-addons-641952 192.168.39.0/24...
	I0224 12:00:46.814605  895269 main.go:141] libmachine: (addons-641952) DBG | private KVM network mk-addons-641952 192.168.39.0/24 created
	I0224 12:00:46.814641  895269 main.go:141] libmachine: (addons-641952) setting up store path in /home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952 ...
	I0224 12:00:46.814649  895269 main.go:141] libmachine: (addons-641952) building disk image from file:///home/jenkins/minikube-integration/20451-887294/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0224 12:00:46.814661  895269 main.go:141] libmachine: (addons-641952) DBG | I0224 12:00:46.814567  895291 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20451-887294/.minikube
	I0224 12:00:46.814729  895269 main.go:141] libmachine: (addons-641952) Downloading /home/jenkins/minikube-integration/20451-887294/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20451-887294/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0224 12:00:47.115048  895269 main.go:141] libmachine: (addons-641952) DBG | I0224 12:00:47.114860  895291 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952/id_rsa...
	I0224 12:00:47.168078  895269 main.go:141] libmachine: (addons-641952) DBG | I0224 12:00:47.167900  895291 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952/addons-641952.rawdisk...
	I0224 12:00:47.168110  895269 main.go:141] libmachine: (addons-641952) DBG | Writing magic tar header
	I0224 12:00:47.168121  895269 main.go:141] libmachine: (addons-641952) DBG | Writing SSH key tar header
	I0224 12:00:47.168128  895269 main.go:141] libmachine: (addons-641952) DBG | I0224 12:00:47.168030  895291 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952 ...
	I0224 12:00:47.168141  895269 main.go:141] libmachine: (addons-641952) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952
	I0224 12:00:47.168222  895269 main.go:141] libmachine: (addons-641952) setting executable bit set on /home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952 (perms=drwx------)
	I0224 12:00:47.168242  895269 main.go:141] libmachine: (addons-641952) setting executable bit set on /home/jenkins/minikube-integration/20451-887294/.minikube/machines (perms=drwxr-xr-x)
	I0224 12:00:47.168254  895269 main.go:141] libmachine: (addons-641952) setting executable bit set on /home/jenkins/minikube-integration/20451-887294/.minikube (perms=drwxr-xr-x)
	I0224 12:00:47.168260  895269 main.go:141] libmachine: (addons-641952) setting executable bit set on /home/jenkins/minikube-integration/20451-887294 (perms=drwxrwxr-x)
	I0224 12:00:47.168266  895269 main.go:141] libmachine: (addons-641952) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20451-887294/.minikube/machines
	I0224 12:00:47.168273  895269 main.go:141] libmachine: (addons-641952) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0224 12:00:47.168282  895269 main.go:141] libmachine: (addons-641952) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0224 12:00:47.168287  895269 main.go:141] libmachine: (addons-641952) creating domain...
	I0224 12:00:47.168296  895269 main.go:141] libmachine: (addons-641952) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20451-887294/.minikube
	I0224 12:00:47.168304  895269 main.go:141] libmachine: (addons-641952) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20451-887294
	I0224 12:00:47.168311  895269 main.go:141] libmachine: (addons-641952) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0224 12:00:47.168318  895269 main.go:141] libmachine: (addons-641952) DBG | checking permissions on dir: /home/jenkins
	I0224 12:00:47.168324  895269 main.go:141] libmachine: (addons-641952) DBG | checking permissions on dir: /home
	I0224 12:00:47.168331  895269 main.go:141] libmachine: (addons-641952) DBG | skipping /home - not owner
	I0224 12:00:47.169293  895269 main.go:141] libmachine: (addons-641952) define libvirt domain using xml: 
	I0224 12:00:47.169345  895269 main.go:141] libmachine: (addons-641952) <domain type='kvm'>
	I0224 12:00:47.169387  895269 main.go:141] libmachine: (addons-641952)   <name>addons-641952</name>
	I0224 12:00:47.169415  895269 main.go:141] libmachine: (addons-641952)   <memory unit='MiB'>4000</memory>
	I0224 12:00:47.169446  895269 main.go:141] libmachine: (addons-641952)   <vcpu>2</vcpu>
	I0224 12:00:47.169465  895269 main.go:141] libmachine: (addons-641952)   <features>
	I0224 12:00:47.169477  895269 main.go:141] libmachine: (addons-641952)     <acpi/>
	I0224 12:00:47.169487  895269 main.go:141] libmachine: (addons-641952)     <apic/>
	I0224 12:00:47.169498  895269 main.go:141] libmachine: (addons-641952)     <pae/>
	I0224 12:00:47.169507  895269 main.go:141] libmachine: (addons-641952)     
	I0224 12:00:47.169519  895269 main.go:141] libmachine: (addons-641952)   </features>
	I0224 12:00:47.169530  895269 main.go:141] libmachine: (addons-641952)   <cpu mode='host-passthrough'>
	I0224 12:00:47.169544  895269 main.go:141] libmachine: (addons-641952)   
	I0224 12:00:47.169573  895269 main.go:141] libmachine: (addons-641952)   </cpu>
	I0224 12:00:47.169582  895269 main.go:141] libmachine: (addons-641952)   <os>
	I0224 12:00:47.169588  895269 main.go:141] libmachine: (addons-641952)     <type>hvm</type>
	I0224 12:00:47.169595  895269 main.go:141] libmachine: (addons-641952)     <boot dev='cdrom'/>
	I0224 12:00:47.169599  895269 main.go:141] libmachine: (addons-641952)     <boot dev='hd'/>
	I0224 12:00:47.169605  895269 main.go:141] libmachine: (addons-641952)     <bootmenu enable='no'/>
	I0224 12:00:47.169617  895269 main.go:141] libmachine: (addons-641952)   </os>
	I0224 12:00:47.169652  895269 main.go:141] libmachine: (addons-641952)   <devices>
	I0224 12:00:47.169677  895269 main.go:141] libmachine: (addons-641952)     <disk type='file' device='cdrom'>
	I0224 12:00:47.169703  895269 main.go:141] libmachine: (addons-641952)       <source file='/home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952/boot2docker.iso'/>
	I0224 12:00:47.169715  895269 main.go:141] libmachine: (addons-641952)       <target dev='hdc' bus='scsi'/>
	I0224 12:00:47.169726  895269 main.go:141] libmachine: (addons-641952)       <readonly/>
	I0224 12:00:47.169737  895269 main.go:141] libmachine: (addons-641952)     </disk>
	I0224 12:00:47.169749  895269 main.go:141] libmachine: (addons-641952)     <disk type='file' device='disk'>
	I0224 12:00:47.169771  895269 main.go:141] libmachine: (addons-641952)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0224 12:00:47.169789  895269 main.go:141] libmachine: (addons-641952)       <source file='/home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952/addons-641952.rawdisk'/>
	I0224 12:00:47.169802  895269 main.go:141] libmachine: (addons-641952)       <target dev='hda' bus='virtio'/>
	I0224 12:00:47.169817  895269 main.go:141] libmachine: (addons-641952)     </disk>
	I0224 12:00:47.169829  895269 main.go:141] libmachine: (addons-641952)     <interface type='network'>
	I0224 12:00:47.169841  895269 main.go:141] libmachine: (addons-641952)       <source network='mk-addons-641952'/>
	I0224 12:00:47.169861  895269 main.go:141] libmachine: (addons-641952)       <model type='virtio'/>
	I0224 12:00:47.169877  895269 main.go:141] libmachine: (addons-641952)     </interface>
	I0224 12:00:47.169889  895269 main.go:141] libmachine: (addons-641952)     <interface type='network'>
	I0224 12:00:47.169901  895269 main.go:141] libmachine: (addons-641952)       <source network='default'/>
	I0224 12:00:47.169917  895269 main.go:141] libmachine: (addons-641952)       <model type='virtio'/>
	I0224 12:00:47.169927  895269 main.go:141] libmachine: (addons-641952)     </interface>
	I0224 12:00:47.169941  895269 main.go:141] libmachine: (addons-641952)     <serial type='pty'>
	I0224 12:00:47.169956  895269 main.go:141] libmachine: (addons-641952)       <target port='0'/>
	I0224 12:00:47.169967  895269 main.go:141] libmachine: (addons-641952)     </serial>
	I0224 12:00:47.169975  895269 main.go:141] libmachine: (addons-641952)     <console type='pty'>
	I0224 12:00:47.169988  895269 main.go:141] libmachine: (addons-641952)       <target type='serial' port='0'/>
	I0224 12:00:47.169998  895269 main.go:141] libmachine: (addons-641952)     </console>
	I0224 12:00:47.170009  895269 main.go:141] libmachine: (addons-641952)     <rng model='virtio'>
	I0224 12:00:47.170021  895269 main.go:141] libmachine: (addons-641952)       <backend model='random'>/dev/random</backend>
	I0224 12:00:47.170032  895269 main.go:141] libmachine: (addons-641952)     </rng>
	I0224 12:00:47.170041  895269 main.go:141] libmachine: (addons-641952)     
	I0224 12:00:47.170067  895269 main.go:141] libmachine: (addons-641952)     
	I0224 12:00:47.170084  895269 main.go:141] libmachine: (addons-641952)   </devices>
	I0224 12:00:47.170091  895269 main.go:141] libmachine: (addons-641952) </domain>
	I0224 12:00:47.170097  895269 main.go:141] libmachine: (addons-641952) 
	I0224 12:00:47.176357  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:f7:a0:8d in network default
	I0224 12:00:47.176887  895269 main.go:141] libmachine: (addons-641952) starting domain...
	I0224 12:00:47.176918  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:00:47.176927  895269 main.go:141] libmachine: (addons-641952) ensuring networks are active...
	I0224 12:00:47.177653  895269 main.go:141] libmachine: (addons-641952) Ensuring network default is active
	I0224 12:00:47.177991  895269 main.go:141] libmachine: (addons-641952) Ensuring network mk-addons-641952 is active
	I0224 12:00:47.178466  895269 main.go:141] libmachine: (addons-641952) getting domain XML...
	I0224 12:00:47.179139  895269 main.go:141] libmachine: (addons-641952) creating domain...
	I0224 12:00:48.635750  895269 main.go:141] libmachine: (addons-641952) waiting for IP...
	I0224 12:00:48.636599  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:00:48.636992  895269 main.go:141] libmachine: (addons-641952) DBG | unable to find current IP address of domain addons-641952 in network mk-addons-641952
	I0224 12:00:48.637054  895269 main.go:141] libmachine: (addons-641952) DBG | I0224 12:00:48.636993  895291 retry.go:31] will retry after 276.977172ms: waiting for domain to come up
	I0224 12:00:48.915551  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:00:48.916050  895269 main.go:141] libmachine: (addons-641952) DBG | unable to find current IP address of domain addons-641952 in network mk-addons-641952
	I0224 12:00:48.916081  895269 main.go:141] libmachine: (addons-641952) DBG | I0224 12:00:48.916026  895291 retry.go:31] will retry after 370.976918ms: waiting for domain to come up
	I0224 12:00:49.289168  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:00:49.289679  895269 main.go:141] libmachine: (addons-641952) DBG | unable to find current IP address of domain addons-641952 in network mk-addons-641952
	I0224 12:00:49.289710  895269 main.go:141] libmachine: (addons-641952) DBG | I0224 12:00:49.289617  895291 retry.go:31] will retry after 462.302754ms: waiting for domain to come up
	I0224 12:00:49.753365  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:00:49.753812  895269 main.go:141] libmachine: (addons-641952) DBG | unable to find current IP address of domain addons-641952 in network mk-addons-641952
	I0224 12:00:49.753845  895269 main.go:141] libmachine: (addons-641952) DBG | I0224 12:00:49.753775  895291 retry.go:31] will retry after 464.284368ms: waiting for domain to come up
	I0224 12:00:50.219534  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:00:50.219998  895269 main.go:141] libmachine: (addons-641952) DBG | unable to find current IP address of domain addons-641952 in network mk-addons-641952
	I0224 12:00:50.220022  895269 main.go:141] libmachine: (addons-641952) DBG | I0224 12:00:50.219969  895291 retry.go:31] will retry after 478.218711ms: waiting for domain to come up
	I0224 12:00:50.699517  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:00:50.700019  895269 main.go:141] libmachine: (addons-641952) DBG | unable to find current IP address of domain addons-641952 in network mk-addons-641952
	I0224 12:00:50.700052  895269 main.go:141] libmachine: (addons-641952) DBG | I0224 12:00:50.699984  895291 retry.go:31] will retry after 770.448486ms: waiting for domain to come up
	I0224 12:00:51.471679  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:00:51.472218  895269 main.go:141] libmachine: (addons-641952) DBG | unable to find current IP address of domain addons-641952 in network mk-addons-641952
	I0224 12:00:51.472278  895269 main.go:141] libmachine: (addons-641952) DBG | I0224 12:00:51.472204  895291 retry.go:31] will retry after 999.860706ms: waiting for domain to come up
	I0224 12:00:52.473435  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:00:52.473846  895269 main.go:141] libmachine: (addons-641952) DBG | unable to find current IP address of domain addons-641952 in network mk-addons-641952
	I0224 12:00:52.473900  895269 main.go:141] libmachine: (addons-641952) DBG | I0224 12:00:52.473807  895291 retry.go:31] will retry after 1.186206514s: waiting for domain to come up
	I0224 12:00:53.662231  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:00:53.662648  895269 main.go:141] libmachine: (addons-641952) DBG | unable to find current IP address of domain addons-641952 in network mk-addons-641952
	I0224 12:00:53.662697  895269 main.go:141] libmachine: (addons-641952) DBG | I0224 12:00:53.662606  895291 retry.go:31] will retry after 1.238813383s: waiting for domain to come up
	I0224 12:00:54.902921  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:00:54.903288  895269 main.go:141] libmachine: (addons-641952) DBG | unable to find current IP address of domain addons-641952 in network mk-addons-641952
	I0224 12:00:54.903315  895269 main.go:141] libmachine: (addons-641952) DBG | I0224 12:00:54.903249  895291 retry.go:31] will retry after 1.793660748s: waiting for domain to come up
	I0224 12:00:56.699212  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:00:56.699841  895269 main.go:141] libmachine: (addons-641952) DBG | unable to find current IP address of domain addons-641952 in network mk-addons-641952
	I0224 12:00:56.699867  895269 main.go:141] libmachine: (addons-641952) DBG | I0224 12:00:56.699803  895291 retry.go:31] will retry after 2.239320299s: waiting for domain to come up
	I0224 12:00:58.942531  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:00:58.942943  895269 main.go:141] libmachine: (addons-641952) DBG | unable to find current IP address of domain addons-641952 in network mk-addons-641952
	I0224 12:00:58.943014  895269 main.go:141] libmachine: (addons-641952) DBG | I0224 12:00:58.942940  895291 retry.go:31] will retry after 3.460648751s: waiting for domain to come up
	I0224 12:01:02.405207  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:02.405597  895269 main.go:141] libmachine: (addons-641952) DBG | unable to find current IP address of domain addons-641952 in network mk-addons-641952
	I0224 12:01:02.405631  895269 main.go:141] libmachine: (addons-641952) DBG | I0224 12:01:02.405562  895291 retry.go:31] will retry after 3.034585151s: waiting for domain to come up
	I0224 12:01:05.442442  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:05.442856  895269 main.go:141] libmachine: (addons-641952) DBG | unable to find current IP address of domain addons-641952 in network mk-addons-641952
	I0224 12:01:05.442887  895269 main.go:141] libmachine: (addons-641952) DBG | I0224 12:01:05.442795  895291 retry.go:31] will retry after 4.990802054s: waiting for domain to come up
	I0224 12:01:10.438774  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:10.439258  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has current primary IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:10.439298  895269 main.go:141] libmachine: (addons-641952) found domain IP: 192.168.39.150
	I0224 12:01:10.439312  895269 main.go:141] libmachine: (addons-641952) reserving static IP address...
	I0224 12:01:10.439666  895269 main.go:141] libmachine: (addons-641952) DBG | unable to find host DHCP lease matching {name: "addons-641952", mac: "52:54:00:01:24:05", ip: "192.168.39.150"} in network mk-addons-641952
	I0224 12:01:10.523448  895269 main.go:141] libmachine: (addons-641952) DBG | Getting to WaitForSSH function...
	I0224 12:01:10.523481  895269 main.go:141] libmachine: (addons-641952) reserved static IP address 192.168.39.150 for domain addons-641952
	I0224 12:01:10.523520  895269 main.go:141] libmachine: (addons-641952) waiting for SSH...
	I0224 12:01:10.526518  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:10.526799  895269 main.go:141] libmachine: (addons-641952) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952
	I0224 12:01:10.526831  895269 main.go:141] libmachine: (addons-641952) DBG | unable to find defined IP address of network mk-addons-641952 interface with MAC address 52:54:00:01:24:05
	I0224 12:01:10.527010  895269 main.go:141] libmachine: (addons-641952) DBG | Using SSH client type: external
	I0224 12:01:10.527041  895269 main.go:141] libmachine: (addons-641952) DBG | Using SSH private key: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952/id_rsa (-rw-------)
	I0224 12:01:10.527072  895269 main.go:141] libmachine: (addons-641952) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0224 12:01:10.527087  895269 main.go:141] libmachine: (addons-641952) DBG | About to run SSH command:
	I0224 12:01:10.527100  895269 main.go:141] libmachine: (addons-641952) DBG | exit 0
	I0224 12:01:10.531463  895269 main.go:141] libmachine: (addons-641952) DBG | SSH cmd err, output: exit status 255: 
	I0224 12:01:10.531495  895269 main.go:141] libmachine: (addons-641952) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0224 12:01:10.531502  895269 main.go:141] libmachine: (addons-641952) DBG | command : exit 0
	I0224 12:01:10.531507  895269 main.go:141] libmachine: (addons-641952) DBG | err     : exit status 255
	I0224 12:01:10.531514  895269 main.go:141] libmachine: (addons-641952) DBG | output  : 
	I0224 12:01:13.531642  895269 main.go:141] libmachine: (addons-641952) DBG | Getting to WaitForSSH function...
	I0224 12:01:13.534204  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:13.534600  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:13.534629  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:13.534750  895269 main.go:141] libmachine: (addons-641952) DBG | Using SSH client type: external
	I0224 12:01:13.534789  895269 main.go:141] libmachine: (addons-641952) DBG | Using SSH private key: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952/id_rsa (-rw-------)
	I0224 12:01:13.534831  895269 main.go:141] libmachine: (addons-641952) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.150 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0224 12:01:13.534852  895269 main.go:141] libmachine: (addons-641952) DBG | About to run SSH command:
	I0224 12:01:13.534867  895269 main.go:141] libmachine: (addons-641952) DBG | exit 0
	I0224 12:01:13.665788  895269 main.go:141] libmachine: (addons-641952) DBG | SSH cmd err, output: <nil>: 
	I0224 12:01:13.666003  895269 main.go:141] libmachine: (addons-641952) KVM machine creation complete
	I0224 12:01:13.666407  895269 main.go:141] libmachine: (addons-641952) Calling .GetConfigRaw
	I0224 12:01:13.667039  895269 main.go:141] libmachine: (addons-641952) Calling .DriverName
	I0224 12:01:13.667266  895269 main.go:141] libmachine: (addons-641952) Calling .DriverName
	I0224 12:01:13.667451  895269 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0224 12:01:13.667467  895269 main.go:141] libmachine: (addons-641952) Calling .GetState
	I0224 12:01:13.668889  895269 main.go:141] libmachine: Detecting operating system of created instance...
	I0224 12:01:13.668908  895269 main.go:141] libmachine: Waiting for SSH to be available...
	I0224 12:01:13.668914  895269 main.go:141] libmachine: Getting to WaitForSSH function...
	I0224 12:01:13.668920  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHHostname
	I0224 12:01:13.671312  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:13.671629  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:13.671663  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:13.671854  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHPort
	I0224 12:01:13.672069  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHKeyPath
	I0224 12:01:13.672329  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHKeyPath
	I0224 12:01:13.672514  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHUsername
	I0224 12:01:13.672714  895269 main.go:141] libmachine: Using SSH client type: native
	I0224 12:01:13.672905  895269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0224 12:01:13.672916  895269 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0224 12:01:13.788968  895269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
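
The WaitForSSH step above simply retries a no-op "exit 0" command over SSH (first with the external /usr/bin/ssh client, then with the native client) until the guest answers. Below is a minimal Go sketch of that retry loop using golang.org/x/crypto/ssh rather than minikube's own libmachine client; the address, user, key path and retry interval are taken or inferred from the log lines above and would normally be parameters.

package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH retries "exit 0" on the guest until it succeeds or the deadline
// passes. This mirrors the loop suggested by the log, not minikube's code.
func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // StrictHostKeyChecking=no in the log
		Timeout:         10 * time.Second,            // ConnectTimeout=10 in the log
	}
	deadline := time.Now().Add(timeout)
	var lastErr error
	for time.Now().Before(deadline) {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			lastErr = err
			time.Sleep(3 * time.Second) // the log shows roughly 3s between attempts
			continue
		}
		sess, err := client.NewSession()
		if err != nil {
			client.Close()
			lastErr = err
			time.Sleep(3 * time.Second)
			continue
		}
		err = sess.Run("exit 0")
		sess.Close()
		client.Close()
		if err == nil {
			return nil // guest is reachable over SSH
		}
		lastErr = err
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("timed out waiting for SSH: %w", lastErr)
}

func main() {
	err := waitForSSH("192.168.39.150:22", "docker",
		"/home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952/id_rsa",
		2*time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("SSH is available")
}
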
	I0224 12:01:13.788996  895269 main.go:141] libmachine: Detecting the provisioner...
	I0224 12:01:13.789007  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHHostname
	I0224 12:01:13.791809  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:13.792177  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:13.792204  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:13.792424  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHPort
	I0224 12:01:13.792643  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHKeyPath
	I0224 12:01:13.792796  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHKeyPath
	I0224 12:01:13.792953  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHUsername
	I0224 12:01:13.793143  895269 main.go:141] libmachine: Using SSH client type: native
	I0224 12:01:13.793361  895269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0224 12:01:13.793373  895269 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0224 12:01:13.906330  895269 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0224 12:01:13.906393  895269 main.go:141] libmachine: found compatible host: buildroot
	I0224 12:01:13.906402  895269 main.go:141] libmachine: Provisioning with buildroot...
	I0224 12:01:13.906412  895269 main.go:141] libmachine: (addons-641952) Calling .GetMachineName
	I0224 12:01:13.906689  895269 buildroot.go:166] provisioning hostname "addons-641952"
	I0224 12:01:13.906728  895269 main.go:141] libmachine: (addons-641952) Calling .GetMachineName
	I0224 12:01:13.906932  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHHostname
	I0224 12:01:13.909734  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:13.910075  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:13.910108  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:13.910324  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHPort
	I0224 12:01:13.910542  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHKeyPath
	I0224 12:01:13.910696  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHKeyPath
	I0224 12:01:13.910773  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHUsername
	I0224 12:01:13.910885  895269 main.go:141] libmachine: Using SSH client type: native
	I0224 12:01:13.911075  895269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0224 12:01:13.911087  895269 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-641952 && echo "addons-641952" | sudo tee /etc/hostname
	I0224 12:01:14.041099  895269 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-641952
	
	I0224 12:01:14.041140  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHHostname
	I0224 12:01:14.044009  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:14.044471  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:14.044522  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:14.044703  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHPort
	I0224 12:01:14.044892  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHKeyPath
	I0224 12:01:14.045065  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHKeyPath
	I0224 12:01:14.045288  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHUsername
	I0224 12:01:14.045500  895269 main.go:141] libmachine: Using SSH client type: native
	I0224 12:01:14.045676  895269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0224 12:01:14.045692  895269 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-641952' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-641952/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-641952' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 12:01:14.170996  895269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
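
The shell snippet above only rewrites the 127.0.1.1 entry (or appends one) when the new hostname is not already present in /etc/hosts, which keeps the provisioning step idempotent. The following Go sketch applies the same logic to a hosts file's contents in memory; the ensureHostname helper is illustrative and is not how minikube implements this.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname mirrors the idempotent /etc/hosts edit in the shell snippet
// above: do nothing if the hostname is already listed, otherwise rewrite the
// 127.0.1.1 line or append one.
func ensureHostname(hosts, hostname string) string {
	hasName := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`)
	if hasName.MatchString(hosts) {
		return hosts // already present, leave the file alone
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
	}
	if !strings.HasSuffix(hosts, "\n") {
		hosts += "\n"
	}
	return hosts + "127.0.1.1 " + hostname + "\n"
}

func main() {
	before := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
	fmt.Print(ensureHostname(before, "addons-641952"))
}
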
	I0224 12:01:14.171031  895269 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20451-887294/.minikube CaCertPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20451-887294/.minikube}
	I0224 12:01:14.171082  895269 buildroot.go:174] setting up certificates
	I0224 12:01:14.171098  895269 provision.go:84] configureAuth start
	I0224 12:01:14.171110  895269 main.go:141] libmachine: (addons-641952) Calling .GetMachineName
	I0224 12:01:14.171428  895269 main.go:141] libmachine: (addons-641952) Calling .GetIP
	I0224 12:01:14.174152  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:14.174502  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:14.174531  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:14.174720  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHHostname
	I0224 12:01:14.176883  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:14.177207  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:14.177231  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:14.177359  895269 provision.go:143] copyHostCerts
	I0224 12:01:14.177463  895269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20451-887294/.minikube/ca.pem (1082 bytes)
	I0224 12:01:14.177617  895269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20451-887294/.minikube/cert.pem (1123 bytes)
	I0224 12:01:14.177711  895269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20451-887294/.minikube/key.pem (1679 bytes)
	I0224 12:01:14.177787  895269 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca-key.pem org=jenkins.addons-641952 san=[127.0.0.1 192.168.39.150 addons-641952 localhost minikube]
	I0224 12:01:14.491076  895269 provision.go:177] copyRemoteCerts
	I0224 12:01:14.491160  895269 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 12:01:14.491206  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHHostname
	I0224 12:01:14.493811  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:14.494098  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:14.494127  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:14.494283  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHPort
	I0224 12:01:14.494529  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHKeyPath
	I0224 12:01:14.494687  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHUsername
	I0224 12:01:14.494829  895269 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952/id_rsa Username:docker}
	I0224 12:01:14.584093  895269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0224 12:01:14.611291  895269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0224 12:01:14.638735  895269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0224 12:01:14.666602  895269 provision.go:87] duration metric: took 495.481589ms to configureAuth
	I0224 12:01:14.666649  895269 buildroot.go:189] setting minikube options for container-runtime
	I0224 12:01:14.666960  895269 config.go:182] Loaded profile config "addons-641952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 12:01:14.667081  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHHostname
	I0224 12:01:14.670935  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:14.671317  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:14.671350  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:14.671533  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHPort
	I0224 12:01:14.671781  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHKeyPath
	I0224 12:01:14.671939  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHKeyPath
	I0224 12:01:14.672089  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHUsername
	I0224 12:01:14.672315  895269 main.go:141] libmachine: Using SSH client type: native
	I0224 12:01:14.672505  895269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0224 12:01:14.672521  895269 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0224 12:01:14.923431  895269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0224 12:01:14.923473  895269 main.go:141] libmachine: Checking connection to Docker...
	I0224 12:01:14.923482  895269 main.go:141] libmachine: (addons-641952) Calling .GetURL
	I0224 12:01:14.924872  895269 main.go:141] libmachine: (addons-641952) DBG | using libvirt version 6000000
	I0224 12:01:14.927163  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:14.927476  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:14.927508  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:14.927746  895269 main.go:141] libmachine: Docker is up and running!
	I0224 12:01:14.927777  895269 main.go:141] libmachine: Reticulating splines...
	I0224 12:01:14.927787  895269 client.go:171] duration metric: took 28.736244527s to LocalClient.Create
	I0224 12:01:14.927820  895269 start.go:167] duration metric: took 28.736329698s to libmachine.API.Create "addons-641952"
	I0224 12:01:14.927832  895269 start.go:293] postStartSetup for "addons-641952" (driver="kvm2")
	I0224 12:01:14.927841  895269 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 12:01:14.927857  895269 main.go:141] libmachine: (addons-641952) Calling .DriverName
	I0224 12:01:14.928120  895269 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 12:01:14.928145  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHHostname
	I0224 12:01:14.930483  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:14.930810  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:14.930845  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:14.930965  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHPort
	I0224 12:01:14.931206  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHKeyPath
	I0224 12:01:14.931385  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHUsername
	I0224 12:01:14.931535  895269 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952/id_rsa Username:docker}
	I0224 12:01:15.020564  895269 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 12:01:15.025509  895269 info.go:137] Remote host: Buildroot 2023.02.9
	I0224 12:01:15.025542  895269 filesync.go:126] Scanning /home/jenkins/minikube-integration/20451-887294/.minikube/addons for local assets ...
	I0224 12:01:15.025657  895269 filesync.go:126] Scanning /home/jenkins/minikube-integration/20451-887294/.minikube/files for local assets ...
	I0224 12:01:15.025700  895269 start.go:296] duration metric: took 97.862021ms for postStartSetup
	I0224 12:01:15.025749  895269 main.go:141] libmachine: (addons-641952) Calling .GetConfigRaw
	I0224 12:01:15.026373  895269 main.go:141] libmachine: (addons-641952) Calling .GetIP
	I0224 12:01:15.029173  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:15.029545  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:15.029571  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:15.029898  895269 profile.go:143] Saving config to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/config.json ...
	I0224 12:01:15.030085  895269 start.go:128] duration metric: took 28.859085798s to createHost
	I0224 12:01:15.030114  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHHostname
	I0224 12:01:15.032249  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:15.032532  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:15.032565  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:15.032710  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHPort
	I0224 12:01:15.032960  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHKeyPath
	I0224 12:01:15.033126  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHKeyPath
	I0224 12:01:15.033255  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHUsername
	I0224 12:01:15.033417  895269 main.go:141] libmachine: Using SSH client type: native
	I0224 12:01:15.033591  895269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0224 12:01:15.033605  895269 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0224 12:01:15.150811  895269 main.go:141] libmachine: SSH cmd err, output: <nil>: 1740398475.127030330
	
	I0224 12:01:15.150855  895269 fix.go:216] guest clock: 1740398475.127030330
	I0224 12:01:15.150863  895269 fix.go:229] Guest: 2025-02-24 12:01:15.12703033 +0000 UTC Remote: 2025-02-24 12:01:15.030101406 +0000 UTC m=+28.973959585 (delta=96.928924ms)
	I0224 12:01:15.150933  895269 fix.go:200] guest clock delta is within tolerance: 96.928924ms
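
The clock check above runs date +%s.%N on the guest and compares it against the host's wall clock; here the delta is roughly 97ms, which fix.go accepts as within tolerance. A small sketch of that comparison using the two timestamps from the log; the 2-second tolerance is an assumption for illustration, not minikube's actual threshold, and float parsing loses a little sub-microsecond precision, which is fine for a sketch.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and a host reading in
// the same format, returning how far the guest is ahead (positive) or behind
// (negative).
func clockDelta(guestSecs, hostSecs string) (time.Duration, error) {
	g, err := strconv.ParseFloat(guestSecs, 64)
	if err != nil {
		return 0, err
	}
	h, err := strconv.ParseFloat(hostSecs, 64)
	if err != nil {
		return 0, err
	}
	return time.Duration((g - h) * float64(time.Second)), nil
}

func main() {
	// Values taken from the log above (guest clock vs. host clock).
	delta, err := clockDelta("1740398475.127030330", "1740398475.030101406")
	if err != nil {
		panic(err)
	}
	tolerance := 2 * time.Second // assumed tolerance for the sketch
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < tolerance && delta > -tolerance)
}
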
	I0224 12:01:15.150944  895269 start.go:83] releasing machines lock for "addons-641952", held for 28.980046464s
	I0224 12:01:15.150969  895269 main.go:141] libmachine: (addons-641952) Calling .DriverName
	I0224 12:01:15.151290  895269 main.go:141] libmachine: (addons-641952) Calling .GetIP
	I0224 12:01:15.153774  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:15.154052  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:15.154082  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:15.154276  895269 main.go:141] libmachine: (addons-641952) Calling .DriverName
	I0224 12:01:15.154802  895269 main.go:141] libmachine: (addons-641952) Calling .DriverName
	I0224 12:01:15.155006  895269 main.go:141] libmachine: (addons-641952) Calling .DriverName
	I0224 12:01:15.155122  895269 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0224 12:01:15.155171  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHHostname
	I0224 12:01:15.155206  895269 ssh_runner.go:195] Run: cat /version.json
	I0224 12:01:15.155231  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHHostname
	I0224 12:01:15.157866  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:15.158160  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:15.158189  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:15.158293  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:15.158354  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHPort
	I0224 12:01:15.158514  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHKeyPath
	I0224 12:01:15.158641  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHUsername
	I0224 12:01:15.158648  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:15.158672  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:15.158792  895269 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952/id_rsa Username:docker}
	I0224 12:01:15.158837  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHPort
	I0224 12:01:15.158955  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHKeyPath
	I0224 12:01:15.159096  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHUsername
	I0224 12:01:15.159267  895269 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952/id_rsa Username:docker}
	I0224 12:01:15.267708  895269 ssh_runner.go:195] Run: systemctl --version
	I0224 12:01:15.274426  895269 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0224 12:01:15.442568  895269 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0224 12:01:15.449147  895269 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0224 12:01:15.449234  895269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0224 12:01:15.469116  895269 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0224 12:01:15.469153  895269 start.go:495] detecting cgroup driver to use...
	I0224 12:01:15.469272  895269 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0224 12:01:15.488177  895269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0224 12:01:15.504711  895269 docker.go:217] disabling cri-docker service (if available) ...
	I0224 12:01:15.504791  895269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0224 12:01:15.520925  895269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0224 12:01:15.539141  895269 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0224 12:01:15.665922  895269 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0224 12:01:15.815619  895269 docker.go:233] disabling docker service ...
	I0224 12:01:15.815707  895269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0224 12:01:15.831540  895269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0224 12:01:15.846276  895269 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0224 12:01:15.991002  895269 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0224 12:01:16.117233  895269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0224 12:01:16.132455  895269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 12:01:16.152141  895269 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0224 12:01:16.152221  895269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 12:01:16.163429  895269 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0224 12:01:16.163510  895269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 12:01:16.174884  895269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 12:01:16.186025  895269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 12:01:16.197349  895269 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 12:01:16.208930  895269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 12:01:16.220061  895269 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 12:01:16.240119  895269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
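
The sequence of sed commands above pins the pause image, switches cri-o to the cgroupfs cgroup manager, forces conmon into the pod cgroup, and opens unprivileged ports via default_sysctls, all by editing /etc/crio/crio.conf.d/02-crio.conf in place. The Go sketch below performs the same kind of regexp-based in-place edit; it collapses the separate sed steps into two substitutions (and assumes no pre-existing conmon_cgroup line), so it is a simplification rather than minikube's implementation.

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Stand-in path; the log edits /etc/crio/crio.conf.d/02-crio.conf in the guest.
	const path = "02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf := string(data)

	// Pin the pause image, as in the first sed command above.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

	// Use cgroupfs and put conmon into the pod cgroup (the log does this with
	// a delete of any existing conmon_cgroup line followed by an append).
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("updated", path)
}
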
	I0224 12:01:16.253003  895269 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 12:01:16.264103  895269 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0224 12:01:16.264207  895269 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0224 12:01:16.280394  895269 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0224 12:01:16.291162  895269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 12:01:16.411494  895269 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0224 12:01:16.514509  895269 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0224 12:01:16.514629  895269 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0224 12:01:16.520310  895269 start.go:563] Will wait 60s for crictl version
	I0224 12:01:16.520385  895269 ssh_runner.go:195] Run: which crictl
	I0224 12:01:16.524908  895269 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0224 12:01:16.568042  895269 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0224 12:01:16.568171  895269 ssh_runner.go:195] Run: crio --version
	I0224 12:01:16.597948  895269 ssh_runner.go:195] Run: crio --version
	I0224 12:01:16.629219  895269 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0224 12:01:16.630625  895269 main.go:141] libmachine: (addons-641952) Calling .GetIP
	I0224 12:01:16.633341  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:16.633678  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:16.633705  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:16.633910  895269 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0224 12:01:16.638305  895269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 12:01:16.652316  895269 kubeadm.go:883] updating cluster {Name:addons-641952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-641952 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0224 12:01:16.652453  895269 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0224 12:01:16.652504  895269 ssh_runner.go:195] Run: sudo crictl images --output json
	I0224 12:01:16.688192  895269 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0224 12:01:16.688280  895269 ssh_runner.go:195] Run: which lz4
	I0224 12:01:16.692583  895269 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0224 12:01:16.697001  895269 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0224 12:01:16.697043  895269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0224 12:01:18.135043  895269 crio.go:462] duration metric: took 1.442491793s to copy over tarball
	I0224 12:01:18.135156  895269 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0224 12:01:20.496037  895269 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.360846618s)
	I0224 12:01:20.496072  895269 crio.go:469] duration metric: took 2.360993073s to extract the tarball
	I0224 12:01:20.496084  895269 ssh_runner.go:146] rm: /preloaded.tar.lz4
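
Because no preloaded images were found, the preload tarball is copied into the guest and unpacked into /var with tar -I lz4 before being removed. The same extraction command can be reproduced with a plain exec wrapper; a sketch of that follows, assuming it runs on the guest itself (the log runs it through ssh_runner).

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Same extraction command as in the log, run locally instead of over SSH.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	log.Println("preload tarball extracted into /var")
}
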
	I0224 12:01:20.535720  895269 ssh_runner.go:195] Run: sudo crictl images --output json
	I0224 12:01:20.578457  895269 crio.go:514] all images are preloaded for cri-o runtime.
	I0224 12:01:20.578486  895269 cache_images.go:84] Images are preloaded, skipping loading
	I0224 12:01:20.578497  895269 kubeadm.go:934] updating node { 192.168.39.150 8443 v1.32.2 crio true true} ...
	I0224 12:01:20.578640  895269 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-641952 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:addons-641952 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0224 12:01:20.578730  895269 ssh_runner.go:195] Run: crio config
	I0224 12:01:20.631411  895269 cni.go:84] Creating CNI manager for ""
	I0224 12:01:20.631443  895269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0224 12:01:20.631458  895269 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0224 12:01:20.631480  895269 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.150 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-641952 NodeName:addons-641952 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0224 12:01:20.631628  895269 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.150
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-641952"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.150"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.150"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
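
The generated kubeadm config above is essentially a template filled in with a handful of per-cluster values (node name, node IP, API server port, CRI socket, Kubernetes version, pod and service CIDRs). The toy text/template rendering below reproduces just the InitConfiguration document to show that shape; the struct and template string here are illustrative placeholders, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

type initCfg struct {
	NodeName  string
	NodeIP    string
	APIPort   int
	CRISocket string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.NodeIP}}"
  taints: []
`

func main() {
	cfg := initCfg{
		NodeName:  "addons-641952",
		NodeIP:    "192.168.39.150",
		APIPort:   8443,
		CRISocket: "unix:///var/run/crio/crio.sock",
	}
	tmpl := template.Must(template.New("init").Parse(initTmpl))
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
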
	I0224 12:01:20.631722  895269 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0224 12:01:20.642349  895269 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 12:01:20.642422  895269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0224 12:01:20.652631  895269 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0224 12:01:20.670584  895269 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0224 12:01:20.688450  895269 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0224 12:01:20.706417  895269 ssh_runner.go:195] Run: grep 192.168.39.150	control-plane.minikube.internal$ /etc/hosts
	I0224 12:01:20.710594  895269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.150	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 12:01:20.724096  895269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 12:01:20.852757  895269 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0224 12:01:20.871356  895269 certs.go:68] Setting up /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952 for IP: 192.168.39.150
	I0224 12:01:20.871391  895269 certs.go:194] generating shared ca certs ...
	I0224 12:01:20.871419  895269 certs.go:226] acquiring lock for ca certs: {Name:mk38777c6b180f63d1816020cff79a01106ddf33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 12:01:20.871617  895269 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20451-887294/.minikube/ca.key
	I0224 12:01:21.156148  895269 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20451-887294/.minikube/ca.crt ...
	I0224 12:01:21.156187  895269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/.minikube/ca.crt: {Name:mk285a6ac63be3c2292ae3d442e1628ba083a285 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 12:01:21.156381  895269 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20451-887294/.minikube/ca.key ...
	I0224 12:01:21.156395  895269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/.minikube/ca.key: {Name:mk48978dd0a57738f87214feb46d162e341871a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 12:01:21.156472  895269 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20451-887294/.minikube/proxy-client-ca.key
	I0224 12:01:21.356804  895269 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20451-887294/.minikube/proxy-client-ca.crt ...
	I0224 12:01:21.356842  895269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/.minikube/proxy-client-ca.crt: {Name:mk247497a452b81a7b0a592ca7c9ba7384ab79a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 12:01:21.357003  895269 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20451-887294/.minikube/proxy-client-ca.key ...
	I0224 12:01:21.357014  895269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/.minikube/proxy-client-ca.key: {Name:mk60c262b9b30f27e8b226efa2cf7662f3f4fac0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 12:01:21.357096  895269 certs.go:256] generating profile certs ...
	I0224 12:01:21.357167  895269 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.key
	I0224 12:01:21.357179  895269 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt with IP's: []
	I0224 12:01:21.505208  895269 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt ...
	I0224 12:01:21.505247  895269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: {Name:mkd6db0b3bc1d8abd85f29bcc9ed90b313a495a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 12:01:21.505442  895269 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.key ...
	I0224 12:01:21.505454  895269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.key: {Name:mkf5367c99099af74d0893ce4b97c3678a2cd3d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 12:01:21.505532  895269 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/apiserver.key.fe8e52ca
	I0224 12:01:21.505553  895269 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/apiserver.crt.fe8e52ca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.150]
	I0224 12:01:21.677929  895269 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/apiserver.crt.fe8e52ca ...
	I0224 12:01:21.677968  895269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/apiserver.crt.fe8e52ca: {Name:mk613d2c332e5098ccce13bb46579b98eaef074a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 12:01:21.678140  895269 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/apiserver.key.fe8e52ca ...
	I0224 12:01:21.678153  895269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/apiserver.key.fe8e52ca: {Name:mk5436cf4a6d22d118084c9116fc1464f6d48df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 12:01:21.678229  895269 certs.go:381] copying /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/apiserver.crt.fe8e52ca -> /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/apiserver.crt
	I0224 12:01:21.678306  895269 certs.go:385] copying /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/apiserver.key.fe8e52ca -> /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/apiserver.key
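
The apiserver profile certificate above is generated with IP SANs for the in-cluster service VIP (10.96.0.1), loopback, 10.0.0.1 and the node IP, then signed by the minikube CA. The crypto/x509 sketch below produces a certificate carrying the same IP SANs; it self-signs for brevity instead of signing with a CA, so it illustrates the SAN handling rather than reproducing what minikube does.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // same IP SANs as in the log
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.150"),
		},
	}
	// Self-signed for the sketch: template doubles as its own parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
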
	I0224 12:01:21.678354  895269 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/proxy-client.key
	I0224 12:01:21.678380  895269 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/proxy-client.crt with IP's: []
	I0224 12:01:21.786771  895269 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/proxy-client.crt ...
	I0224 12:01:21.786808  895269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/proxy-client.crt: {Name:mkbc5f3114b8c88fbcbc47a49af85c840cb655e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 12:01:21.786984  895269 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/proxy-client.key ...
	I0224 12:01:21.786998  895269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/proxy-client.key: {Name:mk4f9012e0c191c05468b8e2eb699806f358a4bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 12:01:21.787184  895269 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca-key.pem (1679 bytes)
	I0224 12:01:21.787223  895269 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem (1082 bytes)
	I0224 12:01:21.787250  895269 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem (1123 bytes)
	I0224 12:01:21.787276  895269 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/key.pem (1679 bytes)
	I0224 12:01:21.787882  895269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0224 12:01:21.829479  895269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0224 12:01:21.873441  895269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0224 12:01:21.902534  895269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0224 12:01:21.929161  895269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0224 12:01:21.954851  895269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0224 12:01:21.981743  895269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0224 12:01:22.009321  895269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0224 12:01:22.036249  895269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0224 12:01:22.062081  895269 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0224 12:01:22.082927  895269 ssh_runner.go:195] Run: openssl version
	I0224 12:01:22.089720  895269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0224 12:01:22.102309  895269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0224 12:01:22.107587  895269 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 24 12:01 /usr/share/ca-certificates/minikubeCA.pem
	I0224 12:01:22.107663  895269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0224 12:01:22.114316  895269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0224 12:01:22.126913  895269 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0224 12:01:22.131656  895269 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0224 12:01:22.131721  895269 kubeadm.go:392] StartCluster: {Name:addons-641952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-641952 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 12:01:22.131800  895269 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0224 12:01:22.131848  895269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0224 12:01:22.181593  895269 cri.go:89] found id: ""
	I0224 12:01:22.181682  895269 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0224 12:01:22.192687  895269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0224 12:01:22.203855  895269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 12:01:22.214861  895269 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 12:01:22.214893  895269 kubeadm.go:157] found existing configuration files:
	
	I0224 12:01:22.214942  895269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0224 12:01:22.225564  895269 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0224 12:01:22.225631  895269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0224 12:01:22.236110  895269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0224 12:01:22.246054  895269 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0224 12:01:22.246136  895269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0224 12:01:22.257080  895269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0224 12:01:22.267729  895269 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0224 12:01:22.267809  895269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0224 12:01:22.278508  895269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0224 12:01:22.288773  895269 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0224 12:01:22.288854  895269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
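	# The grep/rm pairs above are minikube's stale-kubeconfig cleanup: any of the four config
	# files that is missing or does not point at control-plane.minikube.internal:8443 is removed
	# before kubeadm runs. A condensed, illustrative sketch (loop form assumed, not the literal
	# invocation minikube used):
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q https://control-plane.minikube.internal:8443 "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"   # grep fails for absent or foreign configs
	done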
	I0224 12:01:22.299345  895269 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0224 12:01:22.462630  895269 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 12:01:33.282171  895269 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0224 12:01:33.282239  895269 kubeadm.go:310] [preflight] Running pre-flight checks
	I0224 12:01:33.282334  895269 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0224 12:01:33.282503  895269 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0224 12:01:33.282675  895269 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0224 12:01:33.282776  895269 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0224 12:01:33.284926  895269 out.go:235]   - Generating certificates and keys ...
	I0224 12:01:33.285065  895269 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0224 12:01:33.285142  895269 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0224 12:01:33.285274  895269 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0224 12:01:33.285400  895269 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0224 12:01:33.285503  895269 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0224 12:01:33.285577  895269 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0224 12:01:33.285659  895269 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0224 12:01:33.285840  895269 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-641952 localhost] and IPs [192.168.39.150 127.0.0.1 ::1]
	I0224 12:01:33.285933  895269 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0224 12:01:33.286088  895269 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-641952 localhost] and IPs [192.168.39.150 127.0.0.1 ::1]
	I0224 12:01:33.286211  895269 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0224 12:01:33.286301  895269 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0224 12:01:33.286361  895269 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0224 12:01:33.286435  895269 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0224 12:01:33.286516  895269 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0224 12:01:33.286597  895269 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0224 12:01:33.286681  895269 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0224 12:01:33.286804  895269 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0224 12:01:33.286893  895269 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0224 12:01:33.287017  895269 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0224 12:01:33.287150  895269 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0224 12:01:33.288757  895269 out.go:235]   - Booting up control plane ...
	I0224 12:01:33.288865  895269 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0224 12:01:33.288955  895269 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0224 12:01:33.289036  895269 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0224 12:01:33.289179  895269 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 12:01:33.289333  895269 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 12:01:33.289392  895269 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0224 12:01:33.289545  895269 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0224 12:01:33.289672  895269 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0224 12:01:33.289753  895269 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002004707s
	I0224 12:01:33.289854  895269 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0224 12:01:33.289940  895269 kubeadm.go:310] [api-check] The API server is healthy after 5.50261817s
	I0224 12:01:33.290070  895269 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0224 12:01:33.290226  895269 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0224 12:01:33.290287  895269 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0224 12:01:33.290446  895269 kubeadm.go:310] [mark-control-plane] Marking the node addons-641952 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0224 12:01:33.290499  895269 kubeadm.go:310] [bootstrap-token] Using token: mkc0ti.etrssuol35as1rbz
	I0224 12:01:33.292041  895269 out.go:235]   - Configuring RBAC rules ...
	I0224 12:01:33.292195  895269 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0224 12:01:33.292267  895269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0224 12:01:33.292392  895269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0224 12:01:33.292504  895269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0224 12:01:33.292600  895269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0224 12:01:33.292740  895269 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0224 12:01:33.292878  895269 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0224 12:01:33.292946  895269 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0224 12:01:33.293014  895269 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0224 12:01:33.293023  895269 kubeadm.go:310] 
	I0224 12:01:33.293108  895269 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0224 12:01:33.293117  895269 kubeadm.go:310] 
	I0224 12:01:33.293227  895269 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0224 12:01:33.293237  895269 kubeadm.go:310] 
	I0224 12:01:33.293275  895269 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0224 12:01:33.293374  895269 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0224 12:01:33.293457  895269 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0224 12:01:33.293467  895269 kubeadm.go:310] 
	I0224 12:01:33.293550  895269 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0224 12:01:33.293559  895269 kubeadm.go:310] 
	I0224 12:01:33.293625  895269 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0224 12:01:33.293637  895269 kubeadm.go:310] 
	I0224 12:01:33.293715  895269 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0224 12:01:33.293844  895269 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0224 12:01:33.293957  895269 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0224 12:01:33.293966  895269 kubeadm.go:310] 
	I0224 12:01:33.294066  895269 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0224 12:01:33.294174  895269 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0224 12:01:33.294182  895269 kubeadm.go:310] 
	I0224 12:01:33.294287  895269 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token mkc0ti.etrssuol35as1rbz \
	I0224 12:01:33.294412  895269 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:25cdff1b144f9bdda2a397f8df58979800593c9a9a7e9fabc93239253c272d6f \
	I0224 12:01:33.294455  895269 kubeadm.go:310] 	--control-plane 
	I0224 12:01:33.294464  895269 kubeadm.go:310] 
	I0224 12:01:33.294562  895269 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0224 12:01:33.294571  895269 kubeadm.go:310] 
	I0224 12:01:33.294692  895269 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token mkc0ti.etrssuol35as1rbz \
	I0224 12:01:33.294798  895269 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:25cdff1b144f9bdda2a397f8df58979800593c9a9a7e9fabc93239253c272d6f 
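	# The block above is plain kubeadm output; the token and CA-cert hash are real values from
	# this run. An additional worker would join with the exact command kubeadm printed (bootstrap
	# tokens expire after 24h by default, so this only works shortly after init):
	kubeadm join control-plane.minikube.internal:8443 \
	  --token mkc0ti.etrssuol35as1rbz \
	  --discovery-token-ca-cert-hash sha256:25cdff1b144f9bdda2a397f8df58979800593c9a9a7e9fabc93239253c272d6f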
	I0224 12:01:33.294819  895269 cni.go:84] Creating CNI manager for ""
	I0224 12:01:33.294830  895269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0224 12:01:33.296584  895269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0224 12:01:33.297904  895269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0224 12:01:33.310925  895269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
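	# minikube wrote a 496-byte bridge conflist to /etc/cni/net.d/1-k8s.conflist; its exact
	# contents are not captured in this log. A generic bridge + host-local config of the same
	# shape (illustrative only, all values assumed) would be written like this:
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "1.0.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF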
	I0224 12:01:33.335659  895269 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0224 12:01:33.335718  895269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 12:01:33.335788  895269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-641952 minikube.k8s.io/updated_at=2025_02_24T12_01_33_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=b76650f53499dbb51707efa4a87e94b72d747650 minikube.k8s.io/name=addons-641952 minikube.k8s.io/primary=true
	I0224 12:01:33.534191  895269 ops.go:34] apiserver oom_adj: -16
	I0224 12:01:33.534401  895269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 12:01:34.034798  895269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 12:01:34.534789  895269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 12:01:35.035185  895269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 12:01:35.535051  895269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 12:01:36.035416  895269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 12:01:36.535284  895269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 12:01:37.034562  895269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 12:01:37.535279  895269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 12:01:37.653345  895269 kubeadm.go:1113] duration metric: took 4.317688782s to wait for elevateKubeSystemPrivileges
	I0224 12:01:37.653385  895269 kubeadm.go:394] duration metric: took 15.521670573s to StartCluster
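	# The 4.3s "elevateKubeSystemPrivileges" wait above is the cluster-admin grant for the
	# kube-system default service account plus a poll until that account exists. Condensed from
	# the commands in this log (the until-loop form is a sketch; minikube retried ~every 500ms):
	KUBECTL=/var/lib/minikube/binaries/v1.32.2/kubectl
	sudo $KUBECTL create clusterrolebinding minikube-rbac \
	  --clusterrole=cluster-admin --serviceaccount=kube-system:default \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	until sudo $KUBECTL get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done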
	I0224 12:01:37.653408  895269 settings.go:142] acquiring lock: {Name:mk663e441d32b04abcccdab86db3e15276e74de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 12:01:37.653538  895269 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20451-887294/kubeconfig
	I0224 12:01:37.654018  895269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/kubeconfig: {Name:mk0122b69f41cd40d5267f436266ccce22ce5ef2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 12:01:37.654223  895269 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0224 12:01:37.654231  895269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0224 12:01:37.654252  895269 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
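	# The toEnable map above is the addon set requested for this profile. The same state can be
	# inspected or changed with the addons subcommands (profile name taken from this run; the
	# disable line is just an example, it was not executed here):
	out/minikube-linux-amd64 -p addons-641952 addons list
	out/minikube-linux-amd64 -p addons-641952 addons disable volcano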
	I0224 12:01:37.654385  895269 addons.go:69] Setting yakd=true in profile "addons-641952"
	I0224 12:01:37.654401  895269 addons.go:238] Setting addon yakd=true in "addons-641952"
	I0224 12:01:37.654420  895269 addons.go:69] Setting inspektor-gadget=true in profile "addons-641952"
	I0224 12:01:37.654440  895269 host.go:66] Checking if "addons-641952" exists ...
	I0224 12:01:37.654440  895269 addons.go:238] Setting addon inspektor-gadget=true in "addons-641952"
	I0224 12:01:37.654466  895269 addons.go:69] Setting storage-provisioner=true in profile "addons-641952"
	I0224 12:01:37.654485  895269 host.go:66] Checking if "addons-641952" exists ...
	I0224 12:01:37.654490  895269 addons.go:238] Setting addon storage-provisioner=true in "addons-641952"
	I0224 12:01:37.654481  895269 addons.go:69] Setting volcano=true in profile "addons-641952"
	I0224 12:01:37.654508  895269 addons.go:238] Setting addon volcano=true in "addons-641952"
	I0224 12:01:37.654509  895269 addons.go:69] Setting default-storageclass=true in profile "addons-641952"
	I0224 12:01:37.654513  895269 addons.go:69] Setting volumesnapshots=true in profile "addons-641952"
	I0224 12:01:37.654525  895269 host.go:66] Checking if "addons-641952" exists ...
	I0224 12:01:37.654532  895269 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-641952"
	I0224 12:01:37.654536  895269 host.go:66] Checking if "addons-641952" exists ...
	I0224 12:01:37.654538  895269 addons.go:238] Setting addon volumesnapshots=true in "addons-641952"
	I0224 12:01:37.654567  895269 host.go:66] Checking if "addons-641952" exists ...
	I0224 12:01:37.654973  895269 addons.go:69] Setting gcp-auth=true in profile "addons-641952"
	I0224 12:01:37.654980  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.654987  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.654489  895269 config.go:182] Loaded profile config "addons-641952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 12:01:37.655022  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.655024  895269 mustload.go:65] Loading cluster: addons-641952
	I0224 12:01:37.655027  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.655048  895269 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-641952"
	I0224 12:01:37.655049  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.655054  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.655062  895269 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-641952"
	I0224 12:01:37.655084  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.655092  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.655093  895269 addons.go:69] Setting cloud-spanner=true in profile "addons-641952"
	I0224 12:01:37.655105  895269 addons.go:238] Setting addon cloud-spanner=true in "addons-641952"
	I0224 12:01:37.655127  895269 host.go:66] Checking if "addons-641952" exists ...
	I0224 12:01:37.655143  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.655161  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.655190  895269 config.go:182] Loaded profile config "addons-641952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 12:01:37.655404  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.655435  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.655484  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.655513  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.655604  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.655630  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.655650  895269 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-641952"
	I0224 12:01:37.655669  895269 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-641952"
	I0224 12:01:37.655698  895269 host.go:66] Checking if "addons-641952" exists ...
	I0224 12:01:37.655876  895269 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-641952"
	I0224 12:01:37.655899  895269 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-641952"
	I0224 12:01:37.655926  895269 host.go:66] Checking if "addons-641952" exists ...
	I0224 12:01:37.656036  895269 addons.go:69] Setting ingress=true in profile "addons-641952"
	I0224 12:01:37.656051  895269 addons.go:238] Setting addon ingress=true in "addons-641952"
	I0224 12:01:37.656064  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.656081  895269 host.go:66] Checking if "addons-641952" exists ...
	I0224 12:01:37.656093  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.656316  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.656344  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.656432  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.656433  895269 addons.go:69] Setting registry=true in profile "addons-641952"
	I0224 12:01:37.656449  895269 addons.go:238] Setting addon registry=true in "addons-641952"
	I0224 12:01:37.656464  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.656489  895269 host.go:66] Checking if "addons-641952" exists ...
	I0224 12:01:37.656886  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.656931  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.658512  895269 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-641952"
	I0224 12:01:37.658570  895269 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-641952"
	I0224 12:01:37.658599  895269 host.go:66] Checking if "addons-641952" exists ...
	I0224 12:01:37.658601  895269 addons.go:69] Setting metrics-server=true in profile "addons-641952"
	I0224 12:01:37.658623  895269 addons.go:238] Setting addon metrics-server=true in "addons-641952"
	I0224 12:01:37.658657  895269 host.go:66] Checking if "addons-641952" exists ...
	I0224 12:01:37.659252  895269 out.go:177] * Verifying Kubernetes components...
	I0224 12:01:37.661034  895269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 12:01:37.661160  895269 addons.go:69] Setting ingress-dns=true in profile "addons-641952"
	I0224 12:01:37.661186  895269 addons.go:238] Setting addon ingress-dns=true in "addons-641952"
	I0224 12:01:37.661228  895269 host.go:66] Checking if "addons-641952" exists ...
	I0224 12:01:37.664295  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.664388  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.677590  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45711
	I0224 12:01:37.678083  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.678704  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.678732  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.678828  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37985
	I0224 12:01:37.679103  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.679307  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.679407  895269 main.go:141] libmachine: (addons-641952) Calling .GetState
	I0224 12:01:37.679964  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.679983  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.680410  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.681012  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.681057  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.681832  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.681845  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.681873  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.681887  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.684717  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.684767  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.685739  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40487
	I0224 12:01:37.687217  895269 addons.go:238] Setting addon default-storageclass=true in "addons-641952"
	I0224 12:01:37.687272  895269 host.go:66] Checking if "addons-641952" exists ...
	I0224 12:01:37.687667  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.687749  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.688363  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.688994  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.689016  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.691177  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42683
	I0224 12:01:37.691569  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.691784  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.692347  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.692387  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.692772  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.692790  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.693288  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.693511  895269 main.go:141] libmachine: (addons-641952) Calling .GetState
	I0224 12:01:37.694823  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35825
	I0224 12:01:37.695341  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.695453  895269 host.go:66] Checking if "addons-641952" exists ...
	I0224 12:01:37.695849  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.695883  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.696552  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.696569  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.696967  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.697555  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.697597  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.704837  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45141
	I0224 12:01:37.704907  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38087
	I0224 12:01:37.705442  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.705594  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.706239  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.706260  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.706429  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.706442  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.706800  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.706855  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.712625  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.717402  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.717788  895269 main.go:141] libmachine: (addons-641952) Calling .GetState
	I0224 12:01:37.720011  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39305
	I0224 12:01:37.721386  895269 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-641952"
	I0224 12:01:37.723328  895269 host.go:66] Checking if "addons-641952" exists ...
	I0224 12:01:37.723756  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.723811  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.725143  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.726422  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.726448  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.727048  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.727270  895269 main.go:141] libmachine: (addons-641952) Calling .GetState
	I0224 12:01:37.729188  895269 main.go:141] libmachine: (addons-641952) Calling .DriverName
	I0224 12:01:37.730296  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37325
	I0224 12:01:37.730756  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.731355  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.731390  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.731636  895269 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0224 12:01:37.731699  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39665
	I0224 12:01:37.731878  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44061
	I0224 12:01:37.734229  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.734270  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37563
	I0224 12:01:37.734314  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44651
	I0224 12:01:37.734272  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37685
	I0224 12:01:37.734696  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.734919  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.734970  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.734990  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.735168  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.735182  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.735574  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.735685  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.735757  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.735772  895269 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0224 12:01:37.735791  895269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0224 12:01:37.735814  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHHostname
	I0224 12:01:37.735877  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.735930  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.736134  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.736156  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.736576  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.736614  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.736672  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.736689  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.737097  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.737162  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.737370  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.737793  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.737839  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.738071  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.738681  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.738729  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.739019  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.739045  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.739613  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.739661  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.739724  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:37.739758  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.740075  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:37.740098  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:37.740346  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHPort
	I0224 12:01:37.740664  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHKeyPath
	I0224 12:01:37.741433  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHUsername
	I0224 12:01:37.741659  895269 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952/id_rsa Username:docker}
	I0224 12:01:37.742774  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40653
	I0224 12:01:37.743420  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.744123  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.744140  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.744575  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.745523  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37681
	I0224 12:01:37.749509  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35617
	I0224 12:01:37.750220  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.750898  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.750923  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.751491  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.751732  895269 main.go:141] libmachine: (addons-641952) Calling .GetState
	I0224 12:01:37.753824  895269 main.go:141] libmachine: (addons-641952) Calling .DriverName
	I0224 12:01:37.756007  895269 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0224 12:01:37.756900  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46043
	I0224 12:01:37.757504  895269 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0224 12:01:37.757523  895269 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0224 12:01:37.757549  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHHostname
	I0224 12:01:37.758202  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.758830  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.758851  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.759297  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.759493  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33003
	I0224 12:01:37.760001  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.760101  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.760176  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.760579  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.760606  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.760985  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.761471  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:37.761508  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44849
	I0224 12:01:37.761690  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.761749  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.761891  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:37.761930  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:37.762113  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.762212  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHPort
	I0224 12:01:37.762395  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHKeyPath
	I0224 12:01:37.762621  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.762636  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.762708  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHUsername
	I0224 12:01:37.762851  895269 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952/id_rsa Username:docker}
	I0224 12:01:37.763213  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.763460  895269 main.go:141] libmachine: (addons-641952) Calling .DriverName
	I0224 12:01:37.766134  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.766170  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.766931  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.766976  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.768027  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.768686  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.768712  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.769122  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.769486  895269 main.go:141] libmachine: (addons-641952) Calling .GetState
	I0224 12:01:37.770660  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34371
	I0224 12:01:37.771310  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.771477  895269 main.go:141] libmachine: (addons-641952) Calling .DriverName
	I0224 12:01:37.772126  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.772145  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.773764  895269 out.go:177]   - Using image docker.io/registry:2.8.3
	I0224 12:01:37.774826  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.774904  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33089
	I0224 12:01:37.775314  895269 main.go:141] libmachine: (addons-641952) Calling .GetState
	I0224 12:01:37.776038  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.776499  895269 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0224 12:01:37.776611  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.776642  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.777073  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.777702  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.777737  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.777985  895269 main.go:141] libmachine: (addons-641952) Calling .DriverName
	I0224 12:01:37.778019  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41413
	I0224 12:01:37.778495  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34613
	I0224 12:01:37.778622  895269 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0224 12:01:37.778645  895269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0224 12:01:37.778671  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHHostname
	I0224 12:01:37.778679  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.779060  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.779659  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.779696  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.780207  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.780244  895269 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0224 12:01:37.780439  895269 main.go:141] libmachine: (addons-641952) Calling .GetState
	I0224 12:01:37.781886  895269 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0224 12:01:37.781907  895269 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0224 12:01:37.781938  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHHostname
	I0224 12:01:37.782080  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.782100  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.782553  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:37.782559  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.782783  895269 main.go:141] libmachine: (addons-641952) Calling .GetState
	I0224 12:01:37.783130  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:37.783262  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:37.783510  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHPort
	I0224 12:01:37.783779  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHKeyPath
	I0224 12:01:37.784032  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHUsername
	I0224 12:01:37.784443  895269 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952/id_rsa Username:docker}
	I0224 12:01:37.785663  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:37.786308  895269 main.go:141] libmachine: (addons-641952) Calling .DriverName
	I0224 12:01:37.786721  895269 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0224 12:01:37.786737  895269 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0224 12:01:37.786786  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:37.786807  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:37.786825  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHPort
	I0224 12:01:37.786868  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHHostname
	I0224 12:01:37.787669  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHKeyPath
	I0224 12:01:37.787996  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHUsername
	I0224 12:01:37.788084  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44713
	I0224 12:01:37.788284  895269 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952/id_rsa Username:docker}
	I0224 12:01:37.788607  895269 main.go:141] libmachine: (addons-641952) Calling .DriverName
	I0224 12:01:37.789358  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.790008  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.790029  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.790447  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.790586  895269 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 12:01:37.790753  895269 main.go:141] libmachine: (addons-641952) Calling .GetState
	I0224 12:01:37.790808  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:37.791351  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:37.791382  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:37.791569  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHPort
	I0224 12:01:37.791757  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHKeyPath
	I0224 12:01:37.791948  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHUsername
	I0224 12:01:37.792139  895269 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952/id_rsa Username:docker}
	I0224 12:01:37.792151  895269 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0224 12:01:37.792176  895269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0224 12:01:37.792196  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHHostname
	I0224 12:01:37.792554  895269 main.go:141] libmachine: (addons-641952) Calling .DriverName
	I0224 12:01:37.794612  895269 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0224 12:01:37.795191  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34343
	I0224 12:01:37.795693  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.795808  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:37.796014  895269 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0224 12:01:37.796036  895269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0224 12:01:37.796063  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHHostname
	I0224 12:01:37.796400  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.796417  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.796746  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:37.796775  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:37.797364  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.797619  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHPort
	I0224 12:01:37.797796  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHKeyPath
	I0224 12:01:37.797984  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHUsername
	I0224 12:01:37.798184  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39949
	I0224 12:01:37.798325  895269 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952/id_rsa Username:docker}
	I0224 12:01:37.798846  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.799093  895269 main.go:141] libmachine: (addons-641952) Calling .GetState
	I0224 12:01:37.799441  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.799461  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.799524  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:37.799876  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.800509  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:37.800554  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:37.800816  895269 main.go:141] libmachine: (addons-641952) Calling .DriverName
	I0224 12:01:37.801440  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:37.801462  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:37.801525  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHPort
	I0224 12:01:37.802081  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHKeyPath
	I0224 12:01:37.802278  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHUsername
	I0224 12:01:37.802442  895269 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952/id_rsa Username:docker}
	I0224 12:01:37.803114  895269 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.29
	I0224 12:01:37.804674  895269 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0224 12:01:37.804693  895269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0224 12:01:37.804714  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHHostname
	I0224 12:01:37.808032  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:37.808488  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:37.808517  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:37.808816  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHPort
	I0224 12:01:37.809010  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHKeyPath
	I0224 12:01:37.809152  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHUsername
	I0224 12:01:37.809255  895269 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952/id_rsa Username:docker}
	I0224 12:01:37.810509  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37683
	I0224 12:01:37.810632  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33117
	I0224 12:01:37.811034  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.811162  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.811623  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.811640  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.811774  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.811800  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.812054  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.812108  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.812214  895269 main.go:141] libmachine: (addons-641952) Calling .GetState
	I0224 12:01:37.812323  895269 main.go:141] libmachine: (addons-641952) Calling .GetState
	I0224 12:01:37.813959  895269 main.go:141] libmachine: (addons-641952) Calling .DriverName
	I0224 12:01:37.814304  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:37.814318  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:37.814377  895269 main.go:141] libmachine: (addons-641952) Calling .DriverName
	I0224 12:01:37.816365  895269 main.go:141] libmachine: (addons-641952) DBG | Closing plugin on server side
	I0224 12:01:37.816390  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:37.816396  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:37.816401  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:37.816405  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:37.816570  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:37.816578  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	W0224 12:01:37.816671  895269 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0224 12:01:37.817836  895269 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0224 12:01:37.819133  895269 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0224 12:01:37.819158  895269 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0224 12:01:37.819184  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHHostname
	I0224 12:01:37.820212  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45775
	I0224 12:01:37.820685  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.821404  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.821420  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.821903  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.822109  895269 main.go:141] libmachine: (addons-641952) Calling .GetState
	I0224 12:01:37.822544  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:37.822746  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36051
	I0224 12:01:37.823351  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:37.823372  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:37.823750  895269 main.go:141] libmachine: (addons-641952) Calling .DriverName
	I0224 12:01:37.823819  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHPort
	I0224 12:01:37.823960  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHKeyPath
	I0224 12:01:37.824141  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHUsername
	I0224 12:01:37.824322  895269 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952/id_rsa Username:docker}
	I0224 12:01:37.824579  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.825502  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.825516  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.825565  895269 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0224 12:01:37.826210  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34885
	I0224 12:01:37.826376  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.826558  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.826755  895269 main.go:141] libmachine: (addons-641952) Calling .GetState
	I0224 12:01:37.827080  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.827096  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.827400  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.827721  895269 main.go:141] libmachine: (addons-641952) Calling .GetState
	I0224 12:01:37.828484  895269 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0224 12:01:37.828741  895269 main.go:141] libmachine: (addons-641952) Calling .DriverName
	I0224 12:01:37.829668  895269 main.go:141] libmachine: (addons-641952) Calling .DriverName
	I0224 12:01:37.830564  895269 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0224 12:01:37.830825  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33375
	I0224 12:01:37.831452  895269 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0224 12:01:37.831527  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.831469  895269 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0224 12:01:37.832245  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.832263  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.832847  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.833153  895269 main.go:141] libmachine: (addons-641952) Calling .GetState
	I0224 12:01:37.833750  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42543
	I0224 12:01:37.834103  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:37.834173  895269 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0224 12:01:37.834297  895269 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0224 12:01:37.834176  895269 out.go:177]   - Using image docker.io/busybox:stable
	I0224 12:01:37.834942  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:37.834963  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:37.835108  895269 main.go:141] libmachine: (addons-641952) Calling .DriverName
	I0224 12:01:37.835331  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:37.835547  895269 main.go:141] libmachine: (addons-641952) Calling .GetState
	I0224 12:01:37.836073  895269 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0224 12:01:37.836338  895269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0224 12:01:37.836364  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHHostname
	I0224 12:01:37.836781  895269 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0224 12:01:37.837582  895269 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0224 12:01:37.837603  895269 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0224 12:01:37.838365  895269 main.go:141] libmachine: (addons-641952) Calling .DriverName
	I0224 12:01:37.838415  895269 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0224 12:01:37.838449  895269 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0224 12:01:37.838477  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHHostname
	I0224 12:01:37.839715  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:37.840048  895269 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0224 12:01:37.840069  895269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0224 12:01:37.840111  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:37.840141  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHHostname
	I0224 12:01:37.840142  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:37.840297  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHPort
	I0224 12:01:37.840382  895269 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0224 12:01:37.840484  895269 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0224 12:01:37.840502  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHKeyPath
	I0224 12:01:37.840672  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHUsername
	I0224 12:01:37.840798  895269 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952/id_rsa Username:docker}
	I0224 12:01:37.841941  895269 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0224 12:01:37.841963  895269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0224 12:01:37.841981  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHHostname
	I0224 12:01:37.842339  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:37.843001  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:37.843033  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:37.843092  895269 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0224 12:01:37.843430  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHPort
	I0224 12:01:37.843643  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHKeyPath
	I0224 12:01:37.843773  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHUsername
	I0224 12:01:37.843896  895269 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952/id_rsa Username:docker}
	W0224 12:01:37.844677  895269 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0224 12:01:37.844705  895269 retry.go:31] will retry after 333.621576ms: ssh: handshake failed: EOF
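(Editor's note: the "ssh: handshake failed: EOF" warning above is resolved by a plain dial-and-retry loop while sshd inside the VM finishes starting. Below is a minimal, hedged Go sketch of that pattern; it is illustrative only, not minikube's sshutil/retry code, and the address, user, key path, attempt count and backoff values are placeholder assumptions.)

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps re-dialing until the SSH handshake succeeds or attempts run out.
func dialWithRetry(addr, user, keyPath string, attempts int) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath) // placeholder key path
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, host key not pinned
		Timeout:         10 * time.Second,
	}
	backoff := 300 * time.Millisecond
	var lastErr error
	for i := 0; i < attempts; i++ {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		lastErr = err // e.g. "ssh: handshake failed: EOF" while sshd is still coming up
		time.Sleep(backoff)
		backoff *= 2
	}
	return nil, fmt.Errorf("ssh dial %s failed after %d attempts: %w", addr, attempts, lastErr)
}

func main() {
	// placeholder values mirroring the log above
	client, err := dialWithRetry("192.168.39.150:22", "docker", "/path/to/id_rsa", 5)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer client.Close()
	fmt.Println("ssh connection established")
}
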
	I0224 12:01:37.844756  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:37.844943  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:37.844960  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:37.845153  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHPort
	I0224 12:01:37.845341  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHKeyPath
	I0224 12:01:37.845518  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHUsername
	I0224 12:01:37.845591  895269 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0224 12:01:37.845676  895269 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952/id_rsa Username:docker}
	I0224 12:01:37.846171  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:37.846644  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:37.846658  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:37.846872  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHPort
	I0224 12:01:37.846878  895269 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0224 12:01:37.846891  895269 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0224 12:01:37.846906  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHHostname
	I0224 12:01:37.847577  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHKeyPath
	I0224 12:01:37.847758  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHUsername
	I0224 12:01:37.847874  895269 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952/id_rsa Username:docker}
	I0224 12:01:37.850426  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:37.850877  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:37.850957  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:37.851110  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHPort
	I0224 12:01:37.851294  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHKeyPath
	I0224 12:01:37.851462  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHUsername
	I0224 12:01:37.851593  895269 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952/id_rsa Username:docker}
	I0224 12:01:38.317725  895269 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0224 12:01:38.317761  895269 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0224 12:01:38.322376  895269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0224 12:01:38.348292  895269 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0224 12:01:38.348329  895269 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0224 12:01:38.350787  895269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0224 12:01:38.352119  895269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0224 12:01:38.374085  895269 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0224 12:01:38.374123  895269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0224 12:01:38.377993  895269 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0224 12:01:38.378021  895269 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0224 12:01:38.386552  895269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0224 12:01:38.403705  895269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0224 12:01:38.424339  895269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0224 12:01:38.462455  895269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0224 12:01:38.472363  895269 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0224 12:01:38.472400  895269 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0224 12:01:38.481361  895269 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0224 12:01:38.481394  895269 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0224 12:01:38.507199  895269 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0224 12:01:38.507238  895269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0224 12:01:38.532670  895269 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0224 12:01:38.532699  895269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0224 12:01:38.540227  895269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0224 12:01:38.566790  895269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0224 12:01:38.581692  895269 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0224 12:01:38.581731  895269 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0224 12:01:38.688160  895269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0224 12:01:38.699098  895269 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0224 12:01:38.699131  895269 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0224 12:01:38.716983  895269 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0224 12:01:38.717016  895269 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0224 12:01:38.722942  895269 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0224 12:01:38.722973  895269 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0224 12:01:38.725504  895269 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.064432778s)
	I0224 12:01:38.725611  895269 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0224 12:01:38.725610  895269 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.071254376s)
	I0224 12:01:38.725763  895269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0224 12:01:38.902672  895269 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0224 12:01:38.902699  895269 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0224 12:01:38.911650  895269 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0224 12:01:38.911673  895269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0224 12:01:38.924465  895269 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0224 12:01:38.924505  895269 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0224 12:01:39.024453  895269 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0224 12:01:39.024494  895269 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0224 12:01:39.115540  895269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0224 12:01:39.131143  895269 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0224 12:01:39.131181  895269 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0224 12:01:39.190094  895269 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0224 12:01:39.190140  895269 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0224 12:01:39.252965  895269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0224 12:01:39.354129  895269 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0224 12:01:39.354160  895269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0224 12:01:39.436660  895269 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0224 12:01:39.436693  895269 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0224 12:01:39.722719  895269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0224 12:01:39.823744  895269 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0224 12:01:39.823769  895269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0224 12:01:40.230293  895269 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0224 12:01:40.230324  895269 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0224 12:01:40.902105  895269 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0224 12:01:40.902133  895269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0224 12:01:41.109941  895269 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0224 12:01:41.109967  895269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0224 12:01:41.412371  895269 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0224 12:01:41.412417  895269 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0224 12:01:41.760119  895269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0224 12:01:44.586279  895269 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0224 12:01:44.586339  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHHostname
	I0224 12:01:44.590567  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:44.591114  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:44.591150  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:44.591418  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHPort
	I0224 12:01:44.591668  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHKeyPath
	I0224 12:01:44.591936  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHUsername
	I0224 12:01:44.592202  895269 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952/id_rsa Username:docker}
	I0224 12:01:45.068012  895269 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0224 12:01:45.266218  895269 addons.go:238] Setting addon gcp-auth=true in "addons-641952"
	I0224 12:01:45.266342  895269 host.go:66] Checking if "addons-641952" exists ...
	I0224 12:01:45.266806  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:45.266849  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:45.282864  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41465
	I0224 12:01:45.283319  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:45.283919  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:45.283942  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:45.284335  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:45.284847  895269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:01:45.284881  895269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:01:45.301940  895269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37801
	I0224 12:01:45.302487  895269 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:01:45.303040  895269 main.go:141] libmachine: Using API Version  1
	I0224 12:01:45.303074  895269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:01:45.303484  895269 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:01:45.303784  895269 main.go:141] libmachine: (addons-641952) Calling .GetState
	I0224 12:01:45.305789  895269 main.go:141] libmachine: (addons-641952) Calling .DriverName
	I0224 12:01:45.306042  895269 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0224 12:01:45.306067  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHHostname
	I0224 12:01:45.308958  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:45.309502  895269 main.go:141] libmachine: (addons-641952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:24:05", ip: ""} in network mk-addons-641952: {Iface:virbr1 ExpiryTime:2025-02-24 13:01:02 +0000 UTC Type:0 Mac:52:54:00:01:24:05 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-641952 Clientid:01:52:54:00:01:24:05}
	I0224 12:01:45.309536  895269 main.go:141] libmachine: (addons-641952) DBG | domain addons-641952 has defined IP address 192.168.39.150 and MAC address 52:54:00:01:24:05 in network mk-addons-641952
	I0224 12:01:45.309695  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHPort
	I0224 12:01:45.309987  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHKeyPath
	I0224 12:01:45.310178  895269 main.go:141] libmachine: (addons-641952) Calling .GetSSHUsername
	I0224 12:01:45.310366  895269 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/addons-641952/id_rsa Username:docker}
	I0224 12:01:46.986294  895269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.66387847s)
	I0224 12:01:46.986361  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:46.986372  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:46.986366  895269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.635540593s)
	I0224 12:01:46.986419  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:46.986435  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:46.986474  895269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.634328963s)
	I0224 12:01:46.986508  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:46.986519  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:46.986589  895269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.599997935s)
	I0224 12:01:46.986626  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:46.986636  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:46.986714  895269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.582976967s)
	I0224 12:01:46.986731  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:46.986739  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:46.986825  895269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.56245857s)
	I0224 12:01:46.986851  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:46.986858  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:46.986857  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:46.986871  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:46.986880  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:46.986880  895269 main.go:141] libmachine: (addons-641952) DBG | Closing plugin on server side
	I0224 12:01:46.986888  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:46.986908  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:46.986916  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:46.986923  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:46.986929  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:46.986952  895269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.52447061s)
	I0224 12:01:46.986968  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:46.986976  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:46.987010  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:46.987020  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:46.987031  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:46.987038  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:46.987040  895269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.446790187s)
	I0224 12:01:46.987055  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:46.987064  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:46.987132  895269 main.go:141] libmachine: (addons-641952) DBG | Closing plugin on server side
	I0224 12:01:46.987157  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:46.987164  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:46.987167  895269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (8.420347968s)
	I0224 12:01:46.987171  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:46.987177  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:46.987193  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:46.987202  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:46.987265  895269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.299071362s)
	I0224 12:01:46.987284  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:46.987294  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:46.987336  895269 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.261710546s)
	I0224 12:01:46.987534  895269 main.go:141] libmachine: (addons-641952) DBG | Closing plugin on server side
	I0224 12:01:46.987570  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:46.987580  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:46.987591  895269 addons.go:479] Verifying addon ingress=true in "addons-641952"
	I0224 12:01:46.988330  895269 node_ready.go:35] waiting up to 6m0s for node "addons-641952" to be "Ready" ...
	I0224 12:01:46.988568  895269 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.262785979s)
	I0224 12:01:46.988589  895269 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0224 12:01:46.989470  895269 main.go:141] libmachine: (addons-641952) DBG | Closing plugin on server side
	I0224 12:01:46.989503  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:46.989510  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:46.989517  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:46.989524  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:46.989600  895269 main.go:141] libmachine: (addons-641952) DBG | Closing plugin on server side
	I0224 12:01:46.989619  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:46.989629  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:46.989636  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:46.989642  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:46.989747  895269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.874170955s)
	I0224 12:01:46.989776  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:46.989788  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:46.989905  895269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.736896415s)
	I0224 12:01:46.989924  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:46.989933  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:46.990059  895269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.267308548s)
	W0224 12:01:46.990082  895269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0224 12:01:46.990103  895269 retry.go:31] will retry after 257.807928ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
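(Editor's note: the failure above is an ordering race, not a broken manifest: the VolumeSnapshot CRDs and a VolumeSnapshotClass that depends on them land in the same apply batch, so the first apply fails with "no matches for kind ... ensure CRDs are installed first" until the CRDs are registered; the retry at 12:01:47 re-applies with --force and succeeds. Below is a minimal, hedged Go sketch of that retry-on-CRD-race pattern, shelling out to kubectl via os/exec; it is illustrative only, not minikube's addons.go/retry.go code, and the manifest paths, attempt count and backoff are placeholder assumptions.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// fileArgs turns a list of manifests into repeated -f flags for kubectl.
func fileArgs(manifests []string) []string {
	var args []string
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	return args
}

// applyWithCRDRetry re-runs `kubectl apply` while the error indicates CRDs are not yet established.
func applyWithCRDRetry(kubeconfig string, manifests []string, attempts int) error {
	args := append([]string{"--kubeconfig", kubeconfig, "apply"}, fileArgs(manifests)...)
	backoff := 250 * time.Millisecond
	var lastOut string
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastOut = string(out)
		// retry only the "CRD not registered yet" race seen in the log above
		if !strings.Contains(lastOut, "no matches for kind") &&
			!strings.Contains(lastOut, "ensure CRDs are installed first") {
			return fmt.Errorf("kubectl apply failed: %v\n%s", err, lastOut)
		}
		time.Sleep(backoff)
		backoff *= 2
	}
	return fmt.Errorf("kubectl apply still failing after %d attempts:\n%s", attempts, lastOut)
}

func main() {
	// placeholder paths mirroring the addon manifests in the log
	err := applyWithCRDRetry("/var/lib/minikube/kubeconfig", []string{
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
	}, 5)
	if err != nil {
		fmt.Println(err)
	}
}
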
	I0224 12:01:46.990148  895269 main.go:141] libmachine: (addons-641952) DBG | Closing plugin on server side
	I0224 12:01:46.990185  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:46.990192  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:46.990200  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:46.990206  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:46.990264  895269 main.go:141] libmachine: (addons-641952) DBG | Closing plugin on server side
	I0224 12:01:46.990288  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:46.990294  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:46.990302  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:46.990308  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:46.990352  895269 main.go:141] libmachine: (addons-641952) DBG | Closing plugin on server side
	I0224 12:01:46.990370  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:46.990375  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:46.990517  895269 main.go:141] libmachine: (addons-641952) DBG | Closing plugin on server side
	I0224 12:01:46.990540  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:46.990545  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:46.990554  895269 addons.go:479] Verifying addon registry=true in "addons-641952"
	I0224 12:01:46.990886  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:46.990898  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:46.990907  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:46.990914  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:46.990997  895269 main.go:141] libmachine: (addons-641952) DBG | Closing plugin on server side
	I0224 12:01:46.991055  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:46.991078  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:46.991158  895269 main.go:141] libmachine: (addons-641952) DBG | Closing plugin on server side
	I0224 12:01:46.991217  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:46.991230  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:46.992503  895269 out.go:177] * Verifying registry addon...
	I0224 12:01:46.992592  895269 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-641952 service yakd-dashboard -n yakd-dashboard
	
	I0224 12:01:46.993176  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:46.993224  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:46.993241  895269 addons.go:479] Verifying addon metrics-server=true in "addons-641952"
	I0224 12:01:46.993368  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:46.993382  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:46.993390  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:46.993398  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:46.993753  895269 main.go:141] libmachine: (addons-641952) DBG | Closing plugin on server side
	I0224 12:01:46.993796  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:46.993807  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:46.994024  895269 out.go:177] * Verifying ingress addon...
	I0224 12:01:46.994262  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:46.994282  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:46.994350  895269 main.go:141] libmachine: (addons-641952) DBG | Closing plugin on server side
	I0224 12:01:46.994447  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:46.994460  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:46.994536  895269 main.go:141] libmachine: (addons-641952) DBG | Closing plugin on server side
	I0224 12:01:46.994581  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:46.994593  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:46.994602  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:46.994614  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:46.994824  895269 main.go:141] libmachine: (addons-641952) DBG | Closing plugin on server side
	I0224 12:01:46.994873  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:46.994886  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:46.994900  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:46.994912  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:46.995174  895269 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0224 12:01:46.995268  895269 main.go:141] libmachine: (addons-641952) DBG | Closing plugin on server side
	I0224 12:01:46.995323  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:46.995350  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:46.995892  895269 main.go:141] libmachine: (addons-641952) DBG | Closing plugin on server side
	I0224 12:01:46.995926  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:46.995934  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:46.996381  895269 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0224 12:01:46.998598  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:46.998615  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:47.016379  895269 node_ready.go:49] node "addons-641952" has status "Ready":"True"
	I0224 12:01:47.016416  895269 node_ready.go:38] duration metric: took 28.054323ms for node "addons-641952" to be "Ready" ...
	I0224 12:01:47.016429  895269 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 12:01:47.088450  895269 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0224 12:01:47.088487  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:01:47.088536  895269 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0224 12:01:47.088563  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:01:47.088851  895269 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-nfjdc" in "kube-system" namespace to be "Ready" ...
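(Editor's note: the repeated "waiting for pod ... current state: Pending" lines that follow are a plain poll loop: query the pod's Ready condition on an interval until it reports True or a deadline expires. Below is a minimal, hedged Go sketch of that loop using kubectl and os/exec; it is illustrative only, not minikube's kapi.go/pod_ready.go code, and the namespace, selector, poll interval and timeout are placeholder assumptions.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// allTrue reports whether there is at least one entry and every entry equals "True".
func allTrue(vals []string) bool {
	if len(vals) == 0 {
		return false
	}
	for _, v := range vals {
		if v != "True" {
			return false
		}
	}
	return true
}

// waitForPodReady polls the Ready condition of pods matching selector until True or timeout.
func waitForPodReady(namespace, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// prints one "True"/"False" per matching pod
		out, err := exec.Command("kubectl", "-n", namespace, "get", "pods", "-l", selector,
			"-o", `jsonpath={range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}`,
		).Output()
		if err == nil && allTrue(strings.Fields(string(out))) {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // placeholder poll interval
	}
	return fmt.Errorf("pods matching %q in %q not Ready within %s", selector, namespace, timeout)
}

func main() {
	// placeholder selector/namespace mirroring the registry-addon wait in the log
	if err := waitForPodReady("kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("pods Ready")
}
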
	I0224 12:01:47.144055  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:47.144084  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:47.144541  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:47.144570  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:47.144541  895269 main.go:141] libmachine: (addons-641952) DBG | Closing plugin on server side
	W0224 12:01:47.144679  895269 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0224 12:01:47.184074  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:47.184115  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:47.184521  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:47.184542  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:47.248745  895269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0224 12:01:47.493089  895269 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-641952" context rescaled to 1 replicas
	I0224 12:01:47.500285  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:01:47.500377  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:01:48.001862  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:01:48.001863  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:01:48.499415  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:01:48.499511  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:01:49.006423  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:01:49.066850  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:01:49.124307  895269 pod_ready.go:103] pod "amd-gpu-device-plugin-nfjdc" in "kube-system" namespace has status "Ready":"False"
	I0224 12:01:49.543365  895269 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.237287381s)
	I0224 12:01:49.543514  895269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.294714828s)
	I0224 12:01:49.543696  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:49.543723  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:49.544048  895269 main.go:141] libmachine: (addons-641952) DBG | Closing plugin on server side
	I0224 12:01:49.544101  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:49.544117  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:49.544127  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:49.544139  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:49.544395  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:49.544412  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:49.544441  895269 main.go:141] libmachine: (addons-641952) DBG | Closing plugin on server side
	I0224 12:01:49.545583  895269 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0224 12:01:49.545771  895269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.785585723s)
	I0224 12:01:49.545814  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:49.545834  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:49.546074  895269 main.go:141] libmachine: (addons-641952) DBG | Closing plugin on server side
	I0224 12:01:49.546105  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:49.546111  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:49.546119  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:49.546125  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:49.546364  895269 main.go:141] libmachine: (addons-641952) DBG | Closing plugin on server side
	I0224 12:01:49.546400  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:49.546412  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:49.546423  895269 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-641952"
	I0224 12:01:49.548953  895269 out.go:177] * Verifying csi-hostpath-driver addon...
	I0224 12:01:49.550330  895269 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0224 12:01:49.551383  895269 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0224 12:01:49.551797  895269 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0224 12:01:49.551817  895269 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0224 12:01:49.584546  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:01:49.586423  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:01:49.600594  895269 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0224 12:01:49.600626  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:01:49.707516  895269 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0224 12:01:49.707548  895269 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0224 12:01:49.805180  895269 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0224 12:01:49.805222  895269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0224 12:01:49.866203  895269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0224 12:01:49.999147  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:01:50.002579  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:01:50.054799  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:01:50.500101  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:01:50.500349  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:01:50.555841  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:01:51.028168  895269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.16187824s)
	I0224 12:01:51.028256  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:51.028281  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:51.028593  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:51.028614  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:51.028639  895269 main.go:141] libmachine: (addons-641952) DBG | Closing plugin on server side
	I0224 12:01:51.028673  895269 main.go:141] libmachine: Making call to close driver server
	I0224 12:01:51.028688  895269 main.go:141] libmachine: (addons-641952) Calling .Close
	I0224 12:01:51.029010  895269 main.go:141] libmachine: Successfully made call to close driver server
	I0224 12:01:51.029057  895269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 12:01:51.031341  895269 addons.go:479] Verifying addon gcp-auth=true in "addons-641952"
	I0224 12:01:51.033343  895269 out.go:177] * Verifying gcp-auth addon...
	I0224 12:01:51.035888  895269 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0224 12:01:51.045646  895269 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0224 12:01:51.045658  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:01:51.045672  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:01:51.045765  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:01:51.070232  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:01:51.125766  895269 pod_ready.go:103] pod "amd-gpu-device-plugin-nfjdc" in "kube-system" namespace has status "Ready":"False"
	I0224 12:01:51.499287  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:01:51.499885  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:01:51.539689  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:01:51.554732  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:01:52.001198  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:01:52.001454  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:01:52.039275  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:01:52.055595  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:01:52.504309  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:01:52.504424  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:01:52.539400  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:01:52.557217  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:01:53.001158  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:01:53.001204  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:01:53.038808  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:01:53.057291  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:01:53.500882  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:01:53.501665  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:01:53.540291  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:01:53.555301  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:01:53.594099  895269 pod_ready.go:103] pod "amd-gpu-device-plugin-nfjdc" in "kube-system" namespace has status "Ready":"False"
	I0224 12:01:54.000176  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:01:54.000404  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:01:54.039847  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:01:54.055610  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:01:54.499948  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:01:54.500385  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:01:54.539286  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:01:54.555876  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:01:55.009607  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:01:55.009748  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:01:55.039872  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:01:55.055077  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:01:55.500125  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:01:55.500242  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:01:55.539877  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:01:55.554719  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:01:55.594799  895269 pod_ready.go:103] pod "amd-gpu-device-plugin-nfjdc" in "kube-system" namespace has status "Ready":"False"
	I0224 12:01:56.000630  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:01:56.000650  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:01:56.101999  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:01:56.102082  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:01:56.499638  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:01:56.500569  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:01:56.539508  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:01:56.554596  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:01:57.000556  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:01:57.001033  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:01:57.044053  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:01:57.056459  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:01:57.498488  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:01:57.499118  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:01:57.542814  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:01:57.555010  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:01:57.597647  895269 pod_ready.go:103] pod "amd-gpu-device-plugin-nfjdc" in "kube-system" namespace has status "Ready":"False"
	I0224 12:01:57.999988  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:01:58.000137  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:01:58.039113  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:01:58.055645  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:01:58.499549  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:01:58.500496  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:01:58.539759  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:01:58.554725  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:01:59.000360  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:01:59.000602  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:01:59.101745  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:01:59.103276  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:01:59.498937  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:01:59.500458  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:01:59.539310  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:01:59.555324  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:01:59.999930  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:00.001707  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:00.040319  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:00.055984  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:00.094694  895269 pod_ready.go:103] pod "amd-gpu-device-plugin-nfjdc" in "kube-system" namespace has status "Ready":"False"
	I0224 12:02:00.500879  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:00.501012  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:00.540649  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:00.556080  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:00.998170  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:01.000109  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:01.038841  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:01.055138  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:01.498542  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:01.500301  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:01.538979  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:01.555104  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:02.001155  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:02.001497  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:02.039758  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:02.059226  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:02.096237  895269 pod_ready.go:103] pod "amd-gpu-device-plugin-nfjdc" in "kube-system" namespace has status "Ready":"False"
	I0224 12:02:02.499874  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:02.500539  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:02.539671  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:02.555033  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:03.000219  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:03.000555  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:03.039929  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:03.529362  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:03.529587  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:03.529679  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:03.539275  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:03.555799  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:04.000851  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:04.000858  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:04.041404  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:04.055618  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:04.498852  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:04.500741  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:04.539792  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:04.554872  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:04.594482  895269 pod_ready.go:103] pod "amd-gpu-device-plugin-nfjdc" in "kube-system" namespace has status "Ready":"False"
	I0224 12:02:04.999241  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:05.001001  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:05.039620  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:05.055362  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:05.499695  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:05.500297  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:05.539087  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:05.555321  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:05.998946  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:05.999538  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:06.039755  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:06.054877  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:06.501230  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:06.502180  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:06.542583  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:06.556292  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:06.596151  895269 pod_ready.go:103] pod "amd-gpu-device-plugin-nfjdc" in "kube-system" namespace has status "Ready":"False"
	I0224 12:02:07.138458  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:07.138882  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:07.138996  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:07.139030  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:07.499907  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:07.500036  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:07.538847  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:07.554984  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:07.998363  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:08.000829  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:08.039669  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:08.054938  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:08.500723  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:08.501020  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:08.539048  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:08.555262  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:09.001006  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:09.001008  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:09.040310  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:09.055895  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:09.095938  895269 pod_ready.go:103] pod "amd-gpu-device-plugin-nfjdc" in "kube-system" namespace has status "Ready":"False"
	I0224 12:02:09.499490  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:09.500022  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:09.540618  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:09.555287  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:09.999797  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:10.000744  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:10.039483  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:10.056292  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:10.499978  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:10.500678  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:10.539737  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:10.554839  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:10.998592  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:10.999308  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:11.039647  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:11.056017  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:11.100196  895269 pod_ready.go:103] pod "amd-gpu-device-plugin-nfjdc" in "kube-system" namespace has status "Ready":"False"
	I0224 12:02:11.601774  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:11.602371  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:11.602424  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:11.602515  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:11.999849  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:12.000575  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:12.039518  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:12.056239  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:12.499475  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:12.499488  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:12.539795  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:12.555042  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:13.000292  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:13.000797  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:13.040147  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:13.055842  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:13.500658  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:13.500772  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:13.540079  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:13.556530  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:13.595897  895269 pod_ready.go:103] pod "amd-gpu-device-plugin-nfjdc" in "kube-system" namespace has status "Ready":"False"
	I0224 12:02:13.999666  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:14.001255  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:14.039197  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:14.056396  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:14.499581  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:14.500004  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:14.539773  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:14.554759  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:15.000110  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:15.000120  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:15.039580  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:15.055197  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:15.499943  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:15.500640  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:15.539763  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:15.555222  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:15.595077  895269 pod_ready.go:93] pod "amd-gpu-device-plugin-nfjdc" in "kube-system" namespace has status "Ready":"True"
	I0224 12:02:15.595111  895269 pod_ready.go:82] duration metric: took 28.506233315s for pod "amd-gpu-device-plugin-nfjdc" in "kube-system" namespace to be "Ready" ...
	I0224 12:02:15.595125  895269 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-s2nlz" in "kube-system" namespace to be "Ready" ...
	I0224 12:02:15.596768  895269 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-s2nlz" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-s2nlz" not found
	I0224 12:02:15.596794  895269 pod_ready.go:82] duration metric: took 1.660362ms for pod "coredns-668d6bf9bc-s2nlz" in "kube-system" namespace to be "Ready" ...
	E0224 12:02:15.596806  895269 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-s2nlz" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-s2nlz" not found
	I0224 12:02:15.596813  895269 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-whkc9" in "kube-system" namespace to be "Ready" ...
	I0224 12:02:15.601705  895269 pod_ready.go:93] pod "coredns-668d6bf9bc-whkc9" in "kube-system" namespace has status "Ready":"True"
	I0224 12:02:15.601735  895269 pod_ready.go:82] duration metric: took 4.912333ms for pod "coredns-668d6bf9bc-whkc9" in "kube-system" namespace to be "Ready" ...
	I0224 12:02:15.601772  895269 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-641952" in "kube-system" namespace to be "Ready" ...
	I0224 12:02:15.606223  895269 pod_ready.go:93] pod "etcd-addons-641952" in "kube-system" namespace has status "Ready":"True"
	I0224 12:02:15.606250  895269 pod_ready.go:82] duration metric: took 4.468647ms for pod "etcd-addons-641952" in "kube-system" namespace to be "Ready" ...
	I0224 12:02:15.606266  895269 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-641952" in "kube-system" namespace to be "Ready" ...
	I0224 12:02:15.609836  895269 pod_ready.go:93] pod "kube-apiserver-addons-641952" in "kube-system" namespace has status "Ready":"True"
	I0224 12:02:15.609863  895269 pod_ready.go:82] duration metric: took 3.583917ms for pod "kube-apiserver-addons-641952" in "kube-system" namespace to be "Ready" ...
	I0224 12:02:15.609876  895269 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-641952" in "kube-system" namespace to be "Ready" ...
	I0224 12:02:15.792288  895269 pod_ready.go:93] pod "kube-controller-manager-addons-641952" in "kube-system" namespace has status "Ready":"True"
	I0224 12:02:15.792323  895269 pod_ready.go:82] duration metric: took 182.438756ms for pod "kube-controller-manager-addons-641952" in "kube-system" namespace to be "Ready" ...
	I0224 12:02:15.792341  895269 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xjthf" in "kube-system" namespace to be "Ready" ...
	I0224 12:02:15.999908  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:15.999965  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:16.040627  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:16.055075  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:16.192304  895269 pod_ready.go:93] pod "kube-proxy-xjthf" in "kube-system" namespace has status "Ready":"True"
	I0224 12:02:16.192334  895269 pod_ready.go:82] duration metric: took 399.984764ms for pod "kube-proxy-xjthf" in "kube-system" namespace to be "Ready" ...
	I0224 12:02:16.192345  895269 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-641952" in "kube-system" namespace to be "Ready" ...
	I0224 12:02:16.500238  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:16.500313  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:16.539482  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:16.555558  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:16.592527  895269 pod_ready.go:93] pod "kube-scheduler-addons-641952" in "kube-system" namespace has status "Ready":"True"
	I0224 12:02:16.592554  895269 pod_ready.go:82] duration metric: took 400.202461ms for pod "kube-scheduler-addons-641952" in "kube-system" namespace to be "Ready" ...
	I0224 12:02:16.592567  895269 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-7fbb699795-wzbn9" in "kube-system" namespace to be "Ready" ...
	I0224 12:02:17.000221  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:17.003493  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:17.039084  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:17.056057  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:17.498680  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:17.499169  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:17.538939  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:17.555122  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:18.000466  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:18.000590  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:18.039651  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:18.055073  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:18.500418  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:18.500565  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:18.539428  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:18.556000  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:18.598876  895269 pod_ready.go:103] pod "metrics-server-7fbb699795-wzbn9" in "kube-system" namespace has status "Ready":"False"
	I0224 12:02:18.999930  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:19.000263  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:19.039072  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:19.055236  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:19.501230  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:19.501229  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:19.540069  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:19.556000  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:19.999717  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:20.000352  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:20.039832  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:20.055413  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:20.499669  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:20.499729  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:20.539786  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:20.555167  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:20.598911  895269 pod_ready.go:103] pod "metrics-server-7fbb699795-wzbn9" in "kube-system" namespace has status "Ready":"False"
	I0224 12:02:21.000809  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:21.001123  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:21.039803  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:21.055831  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:21.501842  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:21.505894  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:21.539618  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:21.556241  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:21.999281  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:22.000304  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:22.039495  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:22.055880  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:22.498277  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:22.500072  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:22.538595  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:22.554900  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:22.999742  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:22.999963  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:23.038987  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:23.055178  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:23.098614  895269 pod_ready.go:103] pod "metrics-server-7fbb699795-wzbn9" in "kube-system" namespace has status "Ready":"False"
	I0224 12:02:23.499178  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:23.500283  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:23.539308  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:23.555780  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:24.000749  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:24.000962  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:24.039915  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:24.055123  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:24.499797  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:24.500111  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:24.539151  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:24.556347  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:24.998885  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:25.000790  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:25.039864  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:25.055631  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:25.099055  895269 pod_ready.go:103] pod "metrics-server-7fbb699795-wzbn9" in "kube-system" namespace has status "Ready":"False"
	I0224 12:02:25.499525  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:25.500342  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:25.539045  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:25.555465  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:25.999552  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:25.999808  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:26.040237  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:26.055635  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:26.499542  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:26.500276  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:26.539575  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:26.554972  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:26.999537  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:27.000567  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:27.039787  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:27.055535  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:27.099177  895269 pod_ready.go:103] pod "metrics-server-7fbb699795-wzbn9" in "kube-system" namespace has status "Ready":"False"
	I0224 12:02:27.498659  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:27.501225  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:27.539448  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:27.555545  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:28.000058  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:28.000457  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:28.039356  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:28.055836  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:28.499085  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:28.500273  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:28.539177  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:28.555770  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:29.003508  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:02:29.003569  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:29.039556  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:29.055313  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:29.099711  895269 pod_ready.go:103] pod "metrics-server-7fbb699795-wzbn9" in "kube-system" namespace has status "Ready":"False"
	I0224 12:02:29.501999  895269 kapi.go:107] duration metric: took 42.506834558s to wait for kubernetes.io/minikube-addons=registry ...
	I0224 12:02:29.502166  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:29.539194  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:29.555226  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:30.000306  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:30.039030  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:30.054977  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:30.500993  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:30.539866  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:30.554949  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:31.000491  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:31.039872  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:31.055034  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:31.500789  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:31.539697  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:31.554789  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:31.598313  895269 pod_ready.go:103] pod "metrics-server-7fbb699795-wzbn9" in "kube-system" namespace has status "Ready":"False"
	I0224 12:02:32.000060  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:32.039361  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:32.055568  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:32.500341  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:32.539312  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:32.555766  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:33.000130  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:33.040512  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:33.055808  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:33.499983  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:33.539774  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:33.555199  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:33.598811  895269 pod_ready.go:103] pod "metrics-server-7fbb699795-wzbn9" in "kube-system" namespace has status "Ready":"False"
	I0224 12:02:34.000874  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:34.040130  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:34.103498  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:34.499822  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:34.539777  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:34.556349  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:35.002524  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:35.039381  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:35.055938  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:35.506794  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:35.539963  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:35.555557  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:35.598915  895269 pod_ready.go:93] pod "metrics-server-7fbb699795-wzbn9" in "kube-system" namespace has status "Ready":"True"
	I0224 12:02:35.598941  895269 pod_ready.go:82] duration metric: took 19.006368049s for pod "metrics-server-7fbb699795-wzbn9" in "kube-system" namespace to be "Ready" ...
	I0224 12:02:35.598954  895269 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-4wfmt" in "kube-system" namespace to be "Ready" ...
	I0224 12:02:35.603524  895269 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-4wfmt" in "kube-system" namespace has status "Ready":"True"
	I0224 12:02:35.603549  895269 pod_ready.go:82] duration metric: took 4.588027ms for pod "nvidia-device-plugin-daemonset-4wfmt" in "kube-system" namespace to be "Ready" ...
	I0224 12:02:35.603569  895269 pod_ready.go:39] duration metric: took 48.587125169s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 12:02:35.603590  895269 api_server.go:52] waiting for apiserver process to appear ...
	I0224 12:02:35.603651  895269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 12:02:35.624020  895269 api_server.go:72] duration metric: took 57.969762657s to wait for apiserver process to appear ...
	I0224 12:02:35.624062  895269 api_server.go:88] waiting for apiserver healthz status ...
	I0224 12:02:35.624089  895269 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I0224 12:02:35.630911  895269 api_server.go:279] https://192.168.39.150:8443/healthz returned 200:
	ok
	I0224 12:02:35.632008  895269 api_server.go:141] control plane version: v1.32.2
	I0224 12:02:35.632033  895269 api_server.go:131] duration metric: took 7.963829ms to wait for apiserver health ...
	I0224 12:02:35.632042  895269 system_pods.go:43] waiting for kube-system pods to appear ...
	I0224 12:02:35.637450  895269 system_pods.go:59] 18 kube-system pods found
	I0224 12:02:35.637497  895269 system_pods.go:61] "amd-gpu-device-plugin-nfjdc" [1ecaf333-99d2-4202-8da4-1c45a0e08bf6] Running
	I0224 12:02:35.637507  895269 system_pods.go:61] "coredns-668d6bf9bc-whkc9" [1a1bb96b-2f4c-4069-8026-f326ae12884a] Running
	I0224 12:02:35.637524  895269 system_pods.go:61] "csi-hostpath-attacher-0" [4d9127c8-8f6a-4b86-9f26-8c0980ad15d2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0224 12:02:35.637535  895269 system_pods.go:61] "csi-hostpath-resizer-0" [c258058d-9590-4471-aa67-f700fe27369c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0224 12:02:35.637550  895269 system_pods.go:61] "csi-hostpathplugin-l25l6" [d9f95591-d226-40c0-a05b-280d3df8196b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0224 12:02:35.637567  895269 system_pods.go:61] "etcd-addons-641952" [1558a028-4368-4ffb-b4ae-b73b5bf71b2e] Running
	I0224 12:02:35.637580  895269 system_pods.go:61] "kube-apiserver-addons-641952" [e52a3759-cf12-4b73-b8f1-428aa3930216] Running
	I0224 12:02:35.637592  895269 system_pods.go:61] "kube-controller-manager-addons-641952" [bd0f81ee-c5a5-49c0-ba0d-b22db638a688] Running
	I0224 12:02:35.637604  895269 system_pods.go:61] "kube-ingress-dns-minikube" [2aab4809-4155-42d4-be8d-47e92ad19bbb] Running
	I0224 12:02:35.637616  895269 system_pods.go:61] "kube-proxy-xjthf" [90a81b91-f36e-497c-b72a-a9e751c4aaf4] Running
	I0224 12:02:35.637626  895269 system_pods.go:61] "kube-scheduler-addons-641952" [2bf8bf6a-19e8-4a33-b531-2790643e2270] Running
	I0224 12:02:35.637635  895269 system_pods.go:61] "metrics-server-7fbb699795-wzbn9" [19f41d09-6274-428d-a8b3-7910f74ef377] Running
	I0224 12:02:35.637642  895269 system_pods.go:61] "nvidia-device-plugin-daemonset-4wfmt" [a145392e-b0c5-483f-a61a-74bd39d39553] Running
	I0224 12:02:35.637650  895269 system_pods.go:61] "registry-6c88467877-2zl8t" [149aa981-d7d4-42b6-945a-6ab73052301b] Running
	I0224 12:02:35.637656  895269 system_pods.go:61] "registry-proxy-cqbs8" [3871d1d7-fffa-4ad5-b3a8-5e86e6392199] Running
	I0224 12:02:35.637675  895269 system_pods.go:61] "snapshot-controller-68b874b76f-fqfcm" [c4cd93ee-8c13-4e37-b55c-1f354cee0c0a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0224 12:02:35.637689  895269 system_pods.go:61] "snapshot-controller-68b874b76f-rkgzw" [04269895-727a-436f-a780-44dc89844082] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0224 12:02:35.637700  895269 system_pods.go:61] "storage-provisioner" [12c0c7f0-a2fe-4783-84cf-9aeef0a26d03] Running
	I0224 12:02:35.637714  895269 system_pods.go:74] duration metric: took 5.666242ms to wait for pod list to return data ...
	I0224 12:02:35.637723  895269 default_sa.go:34] waiting for default service account to be created ...
	I0224 12:02:35.641416  895269 default_sa.go:45] found service account: "default"
	I0224 12:02:35.641449  895269 default_sa.go:55] duration metric: took 3.676655ms for default service account to be created ...
	I0224 12:02:35.641461  895269 system_pods.go:116] waiting for k8s-apps to be running ...
	I0224 12:02:35.645564  895269 system_pods.go:86] 18 kube-system pods found
	I0224 12:02:35.645594  895269 system_pods.go:89] "amd-gpu-device-plugin-nfjdc" [1ecaf333-99d2-4202-8da4-1c45a0e08bf6] Running
	I0224 12:02:35.645600  895269 system_pods.go:89] "coredns-668d6bf9bc-whkc9" [1a1bb96b-2f4c-4069-8026-f326ae12884a] Running
	I0224 12:02:35.645608  895269 system_pods.go:89] "csi-hostpath-attacher-0" [4d9127c8-8f6a-4b86-9f26-8c0980ad15d2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0224 12:02:35.645614  895269 system_pods.go:89] "csi-hostpath-resizer-0" [c258058d-9590-4471-aa67-f700fe27369c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0224 12:02:35.645622  895269 system_pods.go:89] "csi-hostpathplugin-l25l6" [d9f95591-d226-40c0-a05b-280d3df8196b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0224 12:02:35.645627  895269 system_pods.go:89] "etcd-addons-641952" [1558a028-4368-4ffb-b4ae-b73b5bf71b2e] Running
	I0224 12:02:35.645631  895269 system_pods.go:89] "kube-apiserver-addons-641952" [e52a3759-cf12-4b73-b8f1-428aa3930216] Running
	I0224 12:02:35.645634  895269 system_pods.go:89] "kube-controller-manager-addons-641952" [bd0f81ee-c5a5-49c0-ba0d-b22db638a688] Running
	I0224 12:02:35.645641  895269 system_pods.go:89] "kube-ingress-dns-minikube" [2aab4809-4155-42d4-be8d-47e92ad19bbb] Running
	I0224 12:02:35.645644  895269 system_pods.go:89] "kube-proxy-xjthf" [90a81b91-f36e-497c-b72a-a9e751c4aaf4] Running
	I0224 12:02:35.645648  895269 system_pods.go:89] "kube-scheduler-addons-641952" [2bf8bf6a-19e8-4a33-b531-2790643e2270] Running
	I0224 12:02:35.645651  895269 system_pods.go:89] "metrics-server-7fbb699795-wzbn9" [19f41d09-6274-428d-a8b3-7910f74ef377] Running
	I0224 12:02:35.645654  895269 system_pods.go:89] "nvidia-device-plugin-daemonset-4wfmt" [a145392e-b0c5-483f-a61a-74bd39d39553] Running
	I0224 12:02:35.645657  895269 system_pods.go:89] "registry-6c88467877-2zl8t" [149aa981-d7d4-42b6-945a-6ab73052301b] Running
	I0224 12:02:35.645660  895269 system_pods.go:89] "registry-proxy-cqbs8" [3871d1d7-fffa-4ad5-b3a8-5e86e6392199] Running
	I0224 12:02:35.645667  895269 system_pods.go:89] "snapshot-controller-68b874b76f-fqfcm" [c4cd93ee-8c13-4e37-b55c-1f354cee0c0a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0224 12:02:35.645672  895269 system_pods.go:89] "snapshot-controller-68b874b76f-rkgzw" [04269895-727a-436f-a780-44dc89844082] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0224 12:02:35.645677  895269 system_pods.go:89] "storage-provisioner" [12c0c7f0-a2fe-4783-84cf-9aeef0a26d03] Running
	I0224 12:02:35.645684  895269 system_pods.go:126] duration metric: took 4.217521ms to wait for k8s-apps to be running ...
	I0224 12:02:35.645692  895269 system_svc.go:44] waiting for kubelet service to be running ....
	I0224 12:02:35.645737  895269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 12:02:35.664456  895269 system_svc.go:56] duration metric: took 18.744814ms WaitForService to wait for kubelet
	I0224 12:02:35.664504  895269 kubeadm.go:582] duration metric: took 58.010252944s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0224 12:02:35.664539  895269 node_conditions.go:102] verifying NodePressure condition ...
	I0224 12:02:35.668007  895269 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0224 12:02:35.668042  895269 node_conditions.go:123] node cpu capacity is 2
	I0224 12:02:35.668057  895269 node_conditions.go:105] duration metric: took 3.5131ms to run NodePressure ...
	I0224 12:02:35.668068  895269 start.go:241] waiting for startup goroutines ...
	I0224 12:02:36.000695  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:36.039510  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:36.054471  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:36.500528  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:36.539115  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:36.555118  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:37.310199  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:37.310301  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:37.310545  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:37.499292  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:37.539059  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:37.555060  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:38.000457  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:38.042484  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:38.055815  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:38.500982  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:38.540490  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:38.555491  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:38.999432  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:39.039312  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:39.055620  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:39.499588  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:39.539722  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:39.554593  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:40.000676  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:40.101875  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:40.102325  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:40.501350  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:40.539169  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:40.555981  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:41.000664  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:41.040839  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:41.054887  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:41.500492  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:41.539710  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:41.554571  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:42.000731  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:42.039500  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:42.055877  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:42.500583  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:42.539538  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:42.554742  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:43.000513  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:43.039760  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:43.054990  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:43.500470  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:43.539472  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:43.554713  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:44.292889  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:44.292889  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:44.296429  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:44.507532  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:44.539230  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:44.555690  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:45.000658  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:45.039538  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:45.054551  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:45.499859  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:45.541044  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:45.555321  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:45.999336  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:46.039262  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:46.055624  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:46.502928  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:46.539861  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:46.556494  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:47.085959  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:47.086033  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:47.087114  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:47.500657  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:47.539598  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:47.558024  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:48.000094  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:48.039614  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:48.054743  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:48.500751  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:48.539755  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:48.555535  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:49.000043  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:49.038816  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:49.054762  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:49.501382  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:49.601844  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:49.601990  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:50.001070  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:50.041164  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:50.058946  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:50.500965  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:50.539502  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:50.554500  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:51.002426  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:51.039260  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:51.055395  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:51.501524  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:51.603096  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:51.603800  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:52.005210  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:52.042082  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:52.067506  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:52.500218  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:52.539130  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:52.555403  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:53.000209  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:53.039065  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:53.055870  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:53.501177  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:53.538835  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:53.558246  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:54.000918  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:54.040297  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:54.055637  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:54.501077  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:54.539050  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:54.555883  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:55.017447  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:55.041323  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:55.104830  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:55.500030  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:55.539669  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:55.554889  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:56.000138  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:56.038997  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:56.055540  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:56.499827  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:56.539685  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:56.554774  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:57.000814  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:57.038963  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:57.055985  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:57.500649  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:57.539502  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:57.555419  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:57.999875  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:58.041299  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:58.056202  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:58.557885  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:58.557936  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:58.561481  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:59.000679  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:59.040769  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:59.056241  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:02:59.499959  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:02:59.538941  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:02:59.566371  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:03:00.000076  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:03:00.038964  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:00.055078  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:03:00.500377  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:03:00.539416  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:00.555800  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:03:01.000665  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:03:01.040861  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:01.055501  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:03:01.499997  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:03:01.618948  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:01.619469  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:03:02.000796  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:03:02.039781  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:02.055095  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:03:02.500778  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:03:02.539663  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:02.554671  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:03:03.000804  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:03:03.039800  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:03.055234  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:03:03.500624  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:03:03.600842  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:03.600978  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:03:04.000661  895269 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:03:04.039668  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:04.055020  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:03:04.500994  895269 kapi.go:107] duration metric: took 1m17.504605374s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0224 12:03:04.538858  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:04.556105  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:03:05.108232  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:03:05.108965  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:05.539782  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:05.555354  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:03:06.040038  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:06.055928  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:03:06.539305  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:06.555489  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:03:07.039847  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:07.055109  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:03:07.539859  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:07.555090  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:03:08.042611  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:08.144490  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:03:08.540641  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:08.573290  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:03:09.039630  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:09.054807  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:03:09.539829  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:09.556056  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:03:10.039737  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:10.054673  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:03:10.539589  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:10.554478  895269 kapi.go:107] duration metric: took 1m21.003091631s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0224 12:03:11.040720  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:11.539368  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:12.039298  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:12.540190  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:13.040546  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:13.539020  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:14.040287  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:14.539607  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:15.039824  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:15.540241  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:16.040338  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:16.539775  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:17.039581  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:17.539834  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:18.039748  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:18.540431  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:19.039357  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:19.539584  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:20.039304  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:20.539182  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:21.039820  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:21.539268  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:22.040927  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:22.540277  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:23.040259  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:23.539944  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:24.039782  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:24.539498  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:25.040357  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:25.540789  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:26.039912  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:26.539544  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:27.039225  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:27.540523  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:28.039107  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:28.540059  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:29.042374  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:29.539581  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:30.039377  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:30.538985  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:31.039947  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:31.539403  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:32.039506  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:32.539956  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:33.039448  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:33.539561  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:34.040145  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:34.539764  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:35.039876  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:35.540627  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:36.039531  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:36.540870  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:37.039731  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:37.539749  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:38.039724  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:38.539645  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:39.039308  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:39.539287  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:40.039919  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:40.541062  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:41.040033  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:41.539638  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:42.039847  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:42.540742  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:43.039846  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:43.540517  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:44.039105  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:44.539631  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:45.039576  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:45.540219  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:46.040330  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:46.541374  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:47.039019  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:47.540328  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:48.040618  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:48.539501  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:49.039180  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:49.540371  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:50.039424  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:50.539382  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:51.038913  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:51.539570  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:52.039661  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:52.539512  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:53.039914  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:53.540112  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:54.039838  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:54.539538  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:55.038971  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:55.540047  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:56.039714  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:56.540589  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:57.039009  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:57.539906  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:58.040146  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:58.540302  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:59.039792  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:03:59.539434  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:04:00.040360  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:04:00.540035  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:04:01.039819  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:04:01.543924  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:04:02.040473  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:04:02.542558  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:04:03.039702  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:04:03.540431  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:04:04.039640  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:04:04.539235  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:04:05.040620  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:04:05.539675  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:04:06.040539  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:04:06.540565  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:04:07.040042  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:04:07.539990  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:04:08.040796  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:04:08.540098  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:04:09.040993  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:04:09.539842  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:04:10.039664  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:04:10.538970  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:04:11.039771  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:04:11.541985  895269 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:04:12.039979  895269 kapi.go:107] duration metric: took 2m21.004082466s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0224 12:04:12.041928  895269 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-641952 cluster.
	I0224 12:04:12.043264  895269 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0224 12:04:12.044375  895269 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0224 12:04:12.045839  895269 out.go:177] * Enabled addons: ingress-dns, metrics-server, inspektor-gadget, yakd, amd-gpu-device-plugin, storage-provisioner, cloud-spanner, nvidia-device-plugin, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0224 12:04:12.047122  895269 addons.go:514] duration metric: took 2m34.392874638s for enable addons: enabled=[ingress-dns metrics-server inspektor-gadget yakd amd-gpu-device-plugin storage-provisioner cloud-spanner nvidia-device-plugin default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0224 12:04:12.047167  895269 start.go:246] waiting for cluster config update ...
	I0224 12:04:12.047190  895269 start.go:255] writing updated cluster config ...
	I0224 12:04:12.047512  895269 ssh_runner.go:195] Run: rm -f paused
	I0224 12:04:12.106543  895269 start.go:600] kubectl: 1.32.2, cluster: 1.32.2 (minor skew: 0)
	I0224 12:04:12.108559  895269 out.go:177] * Done! kubectl is now configured to use "addons-641952" cluster and "default" namespace by default
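	For reference, a minimal sketch of opting a single pod out of the credential mounting that the gcp-auth messages above describe, by setting the `gcp-auth-skip-secret` label at creation time. This is illustrative only: the label value "true" and the pod name no-gcp-demo are assumptions, not taken from the captured log.
	
		kubectl --context addons-641952 run no-gcp-demo --image=busybox --labels=gcp-auth-skip-secret=true --restart=Never -- sleep 3600
	
	As the log itself notes, pods that already exist are not retroactively changed; they would need to be recreated (or the addon re-enabled with --refresh) for the new behavior to apply.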
	
	
	==> CRI-O <==
	Feb 24 12:07:18 addons-641952 crio[665]: time="2025-02-24 12:07:18.544911421Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740398838544879370,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595375,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=891b1d06-bd77-4211-bc23-9aa7d01bc7bd name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 12:07:18 addons-641952 crio[665]: time="2025-02-24 12:07:18.548647576Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4dd6ee4-2a96-4ae0-a76f-60be47640ae4 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 12:07:18 addons-641952 crio[665]: time="2025-02-24 12:07:18.548710524Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4dd6ee4-2a96-4ae0-a76f-60be47640ae4 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 12:07:18 addons-641952 crio[665]: time="2025-02-24 12:07:18.549154528Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:46637b048bf9d27b66452ad3f20749371b716fd74bbe43df3f5de8c3d3d2688b,PodSandboxId:55bc07ac82a00d25577aa8b329db2cd9304c9041e4aa085c2f851ee817da13d8,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1740398828867335663,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-cd8fcaca-bd54-49a6-9e22-383da91e5d0a,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: f63e0054-265b-4ddb-915b-17bf3efe01a3,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1d8570f93e6aa5294cd0baf784ec948d58068b5100f7fbcaac9c50d3c2fde80,PodSandboxId:9bcff00971a0c1ecb4e31717ffce936004a1eaf66d577df37957c9eca1174f86,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:7a5342b7662db8de99e045a2b47b889c5701b8dde0ce5ae3f1577bf57a15ed40,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83ef53509aa521591f172ea20befd68f6624877539c03c5353172df8f2528ebb,State:CONTAINER_EXITED,CreatedAt:1740398825695028051,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4ea1014c-001b-4bb4-9910-2e215fd59077,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63a4b33e94f587ae95d326b661b765259a6f877ed10bda4c055b3295416f7030,PodSandboxId:4a8330fa9ac4ae0360dccaac8403d474ef27ecacbc63ce4c844ec991c232659d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1740398699298269186,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3cd89051-3c4b-48a8-a918-4fe1e668d737,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7125aea13d74bfc67660ebb7d6717d5aec6412dcddd72218e6eb1b068cea698,PodSandboxId:7daad2c22c8c888571477fb8e7353913a477679bfa18c197a9d4481b817c66d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1740398656497575580,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1998d197-253b-4bf6-8a26-38cb3521fb90,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:798959adc4dd43a3997f9880e8d6701f8d084179c35695c2b95f7c274b0fd4a9,PodSandboxId:0420610b011de8b8a8af365f07b3a044b6d588d754404091359c5d68ad62e315,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1740398583161359193,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-ztj5d,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6a4c4d50-ed1d-4d47-8615-00d023bdfde4,},Annotations:map[string]string{io.kubernet
es.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:fb566ca36535a641811c82d96ec79e454684cf2aedf47277f25c043dedb57ac9,PodSandboxId:9b1088fee8bbcb77487811e396dad3a1bb41ea4dedba7b064136fdb318507f69,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb4214
6a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_EXITED,CreatedAt:1740398573146356887,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d9127c8-8f6a-4b86-9f26-8c0980ad15d2,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63006476ab53473ea603f89c4cb385ad210beebdc1de53ba608c2d9fb58005ff,PodSandboxId:19e8ba8c7c034a42efb9eae76dd265e8b5d96b91743ef6406e8d800ddf0e5044,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1740398569547793415,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-czfq4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e2878006-ee1b-48e7-964d-c5993adca345,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e0d9f4c44360e63518674162dcb00f459662a02f9b2d91ff6fafd6411f85b9c,PodSandboxId:3c0f2165abe7235fcb54a9ad9ed76252df96d69f720d9bf7b9f06a7b1e1097b1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1740398569426301329,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-wppmd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3ac8a794-0594-4f52-9913-0be11fa69de1,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f9fd7f6a836ad438b4303de06bb332627cb14c5703aa02db6e17745d264375,PodSandboxId:4781811988f436ec4e820e183ec3732fc2ca2d40209cd534a6c8c3e6352021e2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79
f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1740398551768834291,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-kgtx5,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b782d82c-5693-43f8-b6b5-6dd9468b7267,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402f4126fb4e53ed9cf1a8facb6f78763940b3ad887da2396d0442305c406748,PodSandboxId:714b63c4da11269424649ce6b9eae7ce04f730147a8a436c67e25ae50e0cfb85,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device
-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1740398534766061270,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-nfjdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ecaf333-99d2-4202-8da4-1c45a0e08bf6,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:534fcc6385a7f177784024250ff8f09d2c667f86c20880038f4646cf25c78101,PodSandboxId:1fcfd871af7dd31aa22c5f72fac0574f185a15b6aad7591074a0cec82ba5e1cb,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&Im
ageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1740398515960380242,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2aab4809-4155-42d4-be8d-47e92ad19bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0b4badb07a579593b1352ddb4cffbda05a11ec6e7a5a2d2c58d2e221627cb49,PodSandboxId:0d9
bbbcc6be1d7367b5371c4d6922094325db5e99ca0532d0dc8a7d62d60cd8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1740398505161794402,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c0c7f0-a2fe-4783-84cf-9aeef0a26d03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69f3da9381e66b7eb8c071f0e1423d233bc845c4370607b5d40ee2452887bf98,PodSandboxId:6118d49b1375c91
4854a50fa84c60bc485c0822ad1169cb05790d60ce1e3130e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1740398501452682964,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-whkc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a1bb96b-2f4c-4069-8026-f326ae12884a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa1d1797df1b0c6bf9c9f50b997710be61d8f9c576ce4a30e9eee4f841fec9f2,PodSandboxId:bcddd30dd8203d30fdebc6b3d16b176d48d482dff580b84b341095efad9bc7c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1740398497423956573,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xjthf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90a81b91-f36e-497c-b72a-a9e751c4aaf4,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19575553b86ddf85e9c9ae196416ddeac77c7e21eeebcf10393bb6f0b90b1032,PodSandboxId:fc5dd696e09420ef6af723f262dce8f679d24c99490601b34029c7bab066efb8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1740398486898704449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-641952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69666b73e7b9af958f3b3fd5678b8158,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:0c67b7530dae876fc6586a506fe5f5af22b581d83f734582e9a3e2a6726467b5,PodSandboxId:4842d18a7dfce0b76c1138131c894417b7efcf205708cadd5d0ddaa9b0780118,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1740398486845484788,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-641952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441902e74881bb00f51f81a209c3b6c9,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:9c2ad9352e4c845e5bd85f50a5f442d656492f7034f2cbbf2977309146c590bd,PodSandboxId:1c568402605e42b19701bf87f34eaf7c715a753a18177ba7086a4379fb8b7fb9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1740398486810158938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-641952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbdbb33b2e764fa091d66535c9d2b2b7,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8871773c8ca2a
1cb99abf2c228e0e647c2cc66df38323b597b5e03d7878ab691,PodSandboxId:41a8edbd7b39e3d3af8cc2eaa0f7d2a456579fd890a7fb4c67fb12239c3cd9b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1740398486813619706,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-641952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec3b31462ab5c36c1244f199298bdb4,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-c
ollector/interceptors.go:74" id=a4dd6ee4-2a96-4ae0-a76f-60be47640ae4 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 12:07:18 addons-641952 crio[665]: time="2025-02-24 12:07:18.607936309Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f81d68b0-f419-4978-b0d2-f7bcdc06e90d name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 24 12:07:18 addons-641952 crio[665]: time="2025-02-24 12:07:18.608278497Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1120c3d39448bfbaa5d5cbae88330b3e485d4424091ba1e09564346f6d58aacf,Metadata:&PodSandboxMetadata{Name:hello-world-app-7d9564db4-nvfrp,Uid:2956af2e-42b0-4231-9b3a-e00bb389b404,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1740398837551521806,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-7d9564db4-nvfrp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2956af2e-42b0-4231-9b3a-e00bb389b404,pod-template-hash: 7d9564db4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-24T12:07:17.241358209Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4a8330fa9ac4ae0360dccaac8403d474ef27ecacbc63ce4c844ec991c232659d,Metadata:&PodSandboxMetadata{Name:nginx,Uid:3cd89051-3c4b-48a8-a918-4fe1e668d737,Namespace:default,Attempt:0,},St
ate:SANDBOX_READY,CreatedAt:1740398694489826213,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3cd89051-3c4b-48a8-a918-4fe1e668d737,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-24T12:04:53.994779413Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7daad2c22c8c888571477fb8e7353913a477679bfa18c197a9d4481b817c66d3,Metadata:&PodSandboxMetadata{Name:busybox,Uid:1998d197-253b-4bf6-8a26-38cb3521fb90,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1740398653009539867,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1998d197-253b-4bf6-8a26-38cb3521fb90,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-24T12:04:12.688220486Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0420610b011de8b8a8af3
65f07b3a044b6d588d754404091359c5d68ad62e315,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-56d7c84fd4-ztj5d,Uid:6a4c4d50-ed1d-4d47-8615-00d023bdfde4,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1740398571181122244,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-ztj5d,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6a4c4d50-ed1d-4d47-8615-00d023bdfde4,pod-template-hash: 56d7c84fd4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-24T12:01:46.713591232Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0d9bbbcc6be1d7367b5371c4d6922094325db5e99ca0532d0dc8a7d62d60cd8e,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:12c0c7f0-a2fe-4783-84cf-9aeef0a26d03,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,
CreatedAt:1740398504118500513,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c0c7f0-a2fe-4783-84cf-9aeef0a26d03,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"D
irectory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-02-24T12:01:43.486093483Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4781811988f436ec4e820e183ec3732fc2ca2d40209cd534a6c8c3e6352021e2,Metadata:&PodSandboxMetadata{Name:local-path-provisioner-76f89f99b5-kgtx5,Uid:b782d82c-5693-43f8-b6b5-6dd9468b7267,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt:1740398504007372415,Labels:map[string]string{app: local-path-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-kgtx5,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b782d82c-5693-43f8-b6b5-6dd9468b7267,pod-template-hash: 76f89f99b5,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-24T12:01:43.372788097Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1fcfd871af7dd31aa22c5f72fac0574f185a15b6aad7591074a0cec82ba5e1cb,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:2aab4809-4155-42
d4-be8d-47e92ad19bbb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1740398502503277850,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2aab4809-4155-42d4-be8d-47e92ad19bbb,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c\",\"imagePullPolicy\":\"IfNotPrese
nt\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2025-02-24T12:01:42.084222981Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:714b63c4da11269424649ce6b9eae7ce04f730147a8a436c67e25ae50e0cfb85,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-nfjdc,Uid:1ecaf333-99d2-4202-8da4-1c45a0e08bf6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1740398500589609443,Labels:map[string]string{controller-revision-hash: 578b4c597,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-nfjdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ecaf333-99d2-4202-8da4-1c45a0e08bf6,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-24T12:01:39.968502142Z,kubernetes.io/config.source: api,},RuntimeHandler:,}
,&PodSandbox{Id:6118d49b1375c914854a50fa84c60bc485c0822ad1169cb05790d60ce1e3130e,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-whkc9,Uid:1a1bb96b-2f4c-4069-8026-f326ae12884a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1740398497613792073,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-whkc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a1bb96b-2f4c-4069-8026-f326ae12884a,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-24T12:01:37.302017722Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bcddd30dd8203d30fdebc6b3d16b176d48d482dff580b84b341095efad9bc7c5,Metadata:&PodSandboxMetadata{Name:kube-proxy-xjthf,Uid:90a81b91-f36e-497c-b72a-a9e751c4aaf4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1740398497267536159,Labels:map[string]string{controller-revision-hash: 7bb84c4984,io.kubernetes.container.name: POD,io.kube
rnetes.pod.name: kube-proxy-xjthf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90a81b91-f36e-497c-b72a-a9e751c4aaf4,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-24T12:01:36.955914126Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1c568402605e42b19701bf87f34eaf7c715a753a18177ba7086a4379fb8b7fb9,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-641952,Uid:bbdbb33b2e764fa091d66535c9d2b2b7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1740398486625356944,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-641952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbdbb33b2e764fa091d66535c9d2b2b7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: bbdbb33b2e764fa091d66535c9d2b2b7,kubernetes.io/config.seen: 2025-02-24T12:01:26.139091664Z,kubernetes.io/config.source: f
ile,},RuntimeHandler:,},&PodSandbox{Id:41a8edbd7b39e3d3af8cc2eaa0f7d2a456579fd890a7fb4c67fb12239c3cd9b6,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-641952,Uid:7ec3b31462ab5c36c1244f199298bdb4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1740398486618465823,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-641952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec3b31462ab5c36c1244f199298bdb4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7ec3b31462ab5c36c1244f199298bdb4,kubernetes.io/config.seen: 2025-02-24T12:01:26.139090575Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4842d18a7dfce0b76c1138131c894417b7efcf205708cadd5d0ddaa9b0780118,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-641952,Uid:441902e74881bb00f51f81a209c3b6c9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:174039
8486616588338,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-641952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441902e74881bb00f51f81a209c3b6c9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.150:8443,kubernetes.io/config.hash: 441902e74881bb00f51f81a209c3b6c9,kubernetes.io/config.seen: 2025-02-24T12:01:26.139089236Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fc5dd696e09420ef6af723f262dce8f679d24c99490601b34029c7bab066efb8,Metadata:&PodSandboxMetadata{Name:etcd-addons-641952,Uid:69666b73e7b9af958f3b3fd5678b8158,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1740398486595111009,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-641952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69666b73e7b9af958f3b3fd5678b81
58,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.150:2379,kubernetes.io/config.hash: 69666b73e7b9af958f3b3fd5678b8158,kubernetes.io/config.seen: 2025-02-24T12:01:26.139085089Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f81d68b0-f419-4978-b0d2-f7bcdc06e90d name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 24 12:07:18 addons-641952 crio[665]: time="2025-02-24 12:07:18.609493079Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5155e4ef-6080-4692-bad4-d4a7e379b77b name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 12:07:18 addons-641952 crio[665]: time="2025-02-24 12:07:18.609571891Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5155e4ef-6080-4692-bad4-d4a7e379b77b name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 12:07:18 addons-641952 crio[665]: time="2025-02-24 12:07:18.609896985Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:63a4b33e94f587ae95d326b661b765259a6f877ed10bda4c055b3295416f7030,PodSandboxId:4a8330fa9ac4ae0360dccaac8403d474ef27ecacbc63ce4c844ec991c232659d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1740398699298269186,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3cd89051-3c4b-48a8-a918-4fe1e668d737,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7125aea13d74bfc67660ebb7d6717d5aec6412dcddd72218e6eb1b068cea698,PodSandboxId:7daad2c22c8c888571477fb8e7353913a477679bfa18c197a9d4481b817c66d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1740398656497575580,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1998d197-253b-4bf6-8a26-38cb3521fb90,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:798959adc4dd43a3997f9880e8d6701f8d084179c35695c2b95f7c274b0fd4a9,PodSandboxId:0420610b011de8b8a8af365f07b3a044b6d588d754404091359c5d68ad62e315,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1740398583161359193,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-ztj5d,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6a4c4d50-ed1d-4d47-8615-00d023bdfde4,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:51f9fd7f6a836ad438b4303de06bb332627cb14c5703aa02db6e17745d264375,PodSandboxId:4781811988f436ec4e820e183ec3732fc2ca2d40209cd534a6c8c3e6352021e2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e1
6d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1740398551768834291,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-kgtx5,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b782d82c-5693-43f8-b6b5-6dd9468b7267,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402f4126fb4e53ed9cf1a8facb6f78763940b3ad887da2396d0442305c406748,PodSandboxId:714b63c4da11269424649ce6b9eae7ce04f730147a8a436c67e25ae50e0cfb85,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1740398534766061270,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-nfjdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ecaf333-99d2-4202-8da4-1c45a0e08bf6,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:534fcc6385a7f177784024250ff8f09d2c667f86c20880038f4646cf25c78101,PodSandboxId:1fcfd871af7dd31aa22c5f72fac0574f185a15b6aad7591074a0cec82ba5e1cb,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17
e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1740398515960380242,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2aab4809-4155-42d4-be8d-47e92ad19bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0b4badb07a579593b1352ddb4cffbda05a11ec6e7a5a2d2c58d2e221627cb49,PodSandboxId:0d9bbbcc6be1d7367b5371c4d6922094325db5e99ca0532d0dc8a7d62d60cd8e,Metadata:&ContainerMetadata{Name:storag
e-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1740398505161794402,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c0c7f0-a2fe-4783-84cf-9aeef0a26d03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69f3da9381e66b7eb8c071f0e1423d233bc845c4370607b5d40ee2452887bf98,PodSandboxId:6118d49b1375c914854a50fa84c60bc485c0822ad1169cb05790d60ce1e3130e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,
},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1740398501452682964,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-whkc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a1bb96b-2f4c-4069-8026-f326ae12884a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id
:aa1d1797df1b0c6bf9c9f50b997710be61d8f9c576ce4a30e9eee4f841fec9f2,PodSandboxId:bcddd30dd8203d30fdebc6b3d16b176d48d482dff580b84b341095efad9bc7c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1740398497423956573,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xjthf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90a81b91-f36e-497c-b72a-a9e751c4aaf4,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19575553b86ddf85e9c9ae196416ddeac
77c7e21eeebcf10393bb6f0b90b1032,PodSandboxId:fc5dd696e09420ef6af723f262dce8f679d24c99490601b34029c7bab066efb8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1740398486898704449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-641952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69666b73e7b9af958f3b3fd5678b8158,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c67b7530dae876fc6586a506fe5f5af22b581d83f734582e9a3e2a6726467b5,PodSandboxId:48
42d18a7dfce0b76c1138131c894417b7efcf205708cadd5d0ddaa9b0780118,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1740398486845484788,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-641952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441902e74881bb00f51f81a209c3b6c9,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c2ad9352e4c845e5bd85f50a5f442d656492f7034f2cbbf2977309146c590bd,PodSandboxId:1c568402605e42b1970
1bf87f34eaf7c715a753a18177ba7086a4379fb8b7fb9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1740398486810158938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-641952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbdbb33b2e764fa091d66535c9d2b2b7,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8871773c8ca2a1cb99abf2c228e0e647c2cc66df38323b597b5e03d7878ab691,PodSandboxId:41a8edbd7b39e3d3af8cc2eaa0f7d2a45657
9fd890a7fb4c67fb12239c3cd9b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1740398486813619706,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-641952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec3b31462ab5c36c1244f199298bdb4,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5155e4ef-6080-4692-bad4-d4a7e379b77b name=/runtime.v1.RuntimeService/
ListContainers
	Feb 24 12:07:18 addons-641952 crio[665]: time="2025-02-24 12:07:18.617892044Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=12f29a1a-7b76-488b-8035-e1fc55789141 name=/runtime.v1.RuntimeService/Version
	Feb 24 12:07:18 addons-641952 crio[665]: time="2025-02-24 12:07:18.617956777Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=12f29a1a-7b76-488b-8035-e1fc55789141 name=/runtime.v1.RuntimeService/Version
	Feb 24 12:07:18 addons-641952 crio[665]: time="2025-02-24 12:07:18.620702267Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a88e42c6-9946-4a24-8d1f-a7bff75d8110 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 12:07:18 addons-641952 crio[665]: time="2025-02-24 12:07:18.621935879Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740398838621908220,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595375,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a88e42c6-9946-4a24-8d1f-a7bff75d8110 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 12:07:18 addons-641952 crio[665]: time="2025-02-24 12:07:18.622490653Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d8bf4b3-ad71-467b-98bc-2bd8973c13ca name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 12:07:18 addons-641952 crio[665]: time="2025-02-24 12:07:18.622541498Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d8bf4b3-ad71-467b-98bc-2bd8973c13ca name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 12:07:18 addons-641952 crio[665]: time="2025-02-24 12:07:18.623082038Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:46637b048bf9d27b66452ad3f20749371b716fd74bbe43df3f5de8c3d3d2688b,PodSandboxId:55bc07ac82a00d25577aa8b329db2cd9304c9041e4aa085c2f851ee817da13d8,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1740398828867335663,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-cd8fcaca-bd54-49a6-9e22-383da91e5d0a,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: f63e0054-265b-4ddb-915b-17bf3efe01a3,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1d8570f93e6aa5294cd0baf784ec948d58068b5100f7fbcaac9c50d3c2fde80,PodSandboxId:9bcff00971a0c1ecb4e31717ffce936004a1eaf66d577df37957c9eca1174f86,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:7a5342b7662db8de99e045a2b47b889c5701b8dde0ce5ae3f1577bf57a15ed40,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83ef53509aa521591f172ea20befd68f6624877539c03c5353172df8f2528ebb,State:CONTAINER_EXITED,CreatedAt:1740398825695028051,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4ea1014c-001b-4bb4-9910-2e215fd59077,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63a4b33e94f587ae95d326b661b765259a6f877ed10bda4c055b3295416f7030,PodSandboxId:4a8330fa9ac4ae0360dccaac8403d474ef27ecacbc63ce4c844ec991c232659d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1740398699298269186,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3cd89051-3c4b-48a8-a918-4fe1e668d737,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7125aea13d74bfc67660ebb7d6717d5aec6412dcddd72218e6eb1b068cea698,PodSandboxId:7daad2c22c8c888571477fb8e7353913a477679bfa18c197a9d4481b817c66d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1740398656497575580,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1998d197-253b-4bf6-8a26-38cb3521fb90,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:798959adc4dd43a3997f9880e8d6701f8d084179c35695c2b95f7c274b0fd4a9,PodSandboxId:0420610b011de8b8a8af365f07b3a044b6d588d754404091359c5d68ad62e315,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1740398583161359193,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-ztj5d,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6a4c4d50-ed1d-4d47-8615-00d023bdfde4,},Annotations:map[string]string{io.kubernet
es.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:fb566ca36535a641811c82d96ec79e454684cf2aedf47277f25c043dedb57ac9,PodSandboxId:9b1088fee8bbcb77487811e396dad3a1bb41ea4dedba7b064136fdb318507f69,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb4214
6a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_EXITED,CreatedAt:1740398573146356887,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d9127c8-8f6a-4b86-9f26-8c0980ad15d2,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63006476ab53473ea603f89c4cb385ad210beebdc1de53ba608c2d9fb58005ff,PodSandboxId:19e8ba8c7c034a42efb9eae76dd265e8b5d96b91743ef6406e8d800ddf0e5044,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1740398569547793415,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-czfq4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e2878006-ee1b-48e7-964d-c5993adca345,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e0d9f4c44360e63518674162dcb00f459662a02f9b2d91ff6fafd6411f85b9c,PodSandboxId:3c0f2165abe7235fcb54a9ad9ed76252df96d69f720d9bf7b9f06a7b1e1097b1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1740398569426301329,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-wppmd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3ac8a794-0594-4f52-9913-0be11fa69de1,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f9fd7f6a836ad438b4303de06bb332627cb14c5703aa02db6e17745d264375,PodSandboxId:4781811988f436ec4e820e183ec3732fc2ca2d40209cd534a6c8c3e6352021e2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79
f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1740398551768834291,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-kgtx5,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b782d82c-5693-43f8-b6b5-6dd9468b7267,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402f4126fb4e53ed9cf1a8facb6f78763940b3ad887da2396d0442305c406748,PodSandboxId:714b63c4da11269424649ce6b9eae7ce04f730147a8a436c67e25ae50e0cfb85,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device
-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1740398534766061270,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-nfjdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ecaf333-99d2-4202-8da4-1c45a0e08bf6,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:534fcc6385a7f177784024250ff8f09d2c667f86c20880038f4646cf25c78101,PodSandboxId:1fcfd871af7dd31aa22c5f72fac0574f185a15b6aad7591074a0cec82ba5e1cb,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&Im
ageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1740398515960380242,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2aab4809-4155-42d4-be8d-47e92ad19bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0b4badb07a579593b1352ddb4cffbda05a11ec6e7a5a2d2c58d2e221627cb49,PodSandboxId:0d9
bbbcc6be1d7367b5371c4d6922094325db5e99ca0532d0dc8a7d62d60cd8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1740398505161794402,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c0c7f0-a2fe-4783-84cf-9aeef0a26d03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69f3da9381e66b7eb8c071f0e1423d233bc845c4370607b5d40ee2452887bf98,PodSandboxId:6118d49b1375c91
4854a50fa84c60bc485c0822ad1169cb05790d60ce1e3130e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1740398501452682964,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-whkc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a1bb96b-2f4c-4069-8026-f326ae12884a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa1d1797df1b0c6bf9c9f50b997710be61d8f9c576ce4a30e9eee4f841fec9f2,PodSandboxId:bcddd30dd8203d30fdebc6b3d16b176d48d482dff580b84b341095efad9bc7c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1740398497423956573,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xjthf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90a81b91-f36e-497c-b72a-a9e751c4aaf4,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19575553b86ddf85e9c9ae196416ddeac77c7e21eeebcf10393bb6f0b90b1032,PodSandboxId:fc5dd696e09420ef6af723f262dce8f679d24c99490601b34029c7bab066efb8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1740398486898704449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-641952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69666b73e7b9af958f3b3fd5678b8158,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:0c67b7530dae876fc6586a506fe5f5af22b581d83f734582e9a3e2a6726467b5,PodSandboxId:4842d18a7dfce0b76c1138131c894417b7efcf205708cadd5d0ddaa9b0780118,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1740398486845484788,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-641952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441902e74881bb00f51f81a209c3b6c9,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:9c2ad9352e4c845e5bd85f50a5f442d656492f7034f2cbbf2977309146c590bd,PodSandboxId:1c568402605e42b19701bf87f34eaf7c715a753a18177ba7086a4379fb8b7fb9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1740398486810158938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-641952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbdbb33b2e764fa091d66535c9d2b2b7,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8871773c8ca2a
1cb99abf2c228e0e647c2cc66df38323b597b5e03d7878ab691,PodSandboxId:41a8edbd7b39e3d3af8cc2eaa0f7d2a456579fd890a7fb4c67fb12239c3cd9b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1740398486813619706,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-641952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec3b31462ab5c36c1244f199298bdb4,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-c
ollector/interceptors.go:74" id=5d8bf4b3-ad71-467b-98bc-2bd8973c13ca name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 12:07:18 addons-641952 crio[665]: time="2025-02-24 12:07:18.668628200Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9e8c71aa-12b8-442e-910e-0cd39eace3c5 name=/runtime.v1.RuntimeService/Version
	Feb 24 12:07:18 addons-641952 crio[665]: time="2025-02-24 12:07:18.668719697Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9e8c71aa-12b8-442e-910e-0cd39eace3c5 name=/runtime.v1.RuntimeService/Version
	Feb 24 12:07:18 addons-641952 crio[665]: time="2025-02-24 12:07:18.670035359Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a3164782-73d9-4f58-90d1-d41d54b67fa2 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 12:07:18 addons-641952 crio[665]: time="2025-02-24 12:07:18.672148833Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
	Feb 24 12:07:18 addons-641952 crio[665]: time="2025-02-24 12:07:18.672317090Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
	Feb 24 12:07:18 addons-641952 crio[665]: time="2025-02-24 12:07:18.672835700Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740398838671371976,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595375,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a3164782-73d9-4f58-90d1-d41d54b67fa2 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 12:07:18 addons-641952 crio[665]: time="2025-02-24 12:07:18.676239236Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=57e131a5-5efc-462d-aaff-7c722087381e name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 12:07:18 addons-641952 crio[665]: time="2025-02-24 12:07:18.676313278Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=57e131a5-5efc-462d-aaff-7c722087381e name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 12:07:18 addons-641952 crio[665]: time="2025-02-24 12:07:18.678129689Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:46637b048bf9d27b66452ad3f20749371b716fd74bbe43df3f5de8c3d3d2688b,PodSandboxId:55bc07ac82a00d25577aa8b329db2cd9304c9041e4aa085c2f851ee817da13d8,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1740398828867335663,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-cd8fcaca-bd54-49a6-9e22-383da91e5d0a,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: f63e0054-265b-4ddb-915b-17bf3efe01a3,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1d8570f93e6aa5294cd0baf784ec948d58068b5100f7fbcaac9c50d3c2fde80,PodSandboxId:9bcff00971a0c1ecb4e31717ffce936004a1eaf66d577df37957c9eca1174f86,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:7a5342b7662db8de99e045a2b47b889c5701b8dde0ce5ae3f1577bf57a15ed40,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83ef53509aa521591f172ea20befd68f6624877539c03c5353172df8f2528ebb,State:CONTAINER_EXITED,CreatedAt:1740398825695028051,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4ea1014c-001b-4bb4-9910-2e215fd59077,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63a4b33e94f587ae95d326b661b765259a6f877ed10bda4c055b3295416f7030,PodSandboxId:4a8330fa9ac4ae0360dccaac8403d474ef27ecacbc63ce4c844ec991c232659d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1740398699298269186,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3cd89051-3c4b-48a8-a918-4fe1e668d737,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7125aea13d74bfc67660ebb7d6717d5aec6412dcddd72218e6eb1b068cea698,PodSandboxId:7daad2c22c8c888571477fb8e7353913a477679bfa18c197a9d4481b817c66d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1740398656497575580,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1998d197-253b-4bf6-8a26-38cb3521fb90,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:798959adc4dd43a3997f9880e8d6701f8d084179c35695c2b95f7c274b0fd4a9,PodSandboxId:0420610b011de8b8a8af365f07b3a044b6d588d754404091359c5d68ad62e315,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1740398583161359193,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-ztj5d,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6a4c4d50-ed1d-4d47-8615-00d023bdfde4,},Annotations:map[string]string{io.kubernet
es.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:fb566ca36535a641811c82d96ec79e454684cf2aedf47277f25c043dedb57ac9,PodSandboxId:9b1088fee8bbcb77487811e396dad3a1bb41ea4dedba7b064136fdb318507f69,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb4214
6a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_EXITED,CreatedAt:1740398573146356887,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d9127c8-8f6a-4b86-9f26-8c0980ad15d2,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63006476ab53473ea603f89c4cb385ad210beebdc1de53ba608c2d9fb58005ff,PodSandboxId:19e8ba8c7c034a42efb9eae76dd265e8b5d96b91743ef6406e8d800ddf0e5044,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1740398569547793415,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-czfq4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e2878006-ee1b-48e7-964d-c5993adca345,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e0d9f4c44360e63518674162dcb00f459662a02f9b2d91ff6fafd6411f85b9c,PodSandboxId:3c0f2165abe7235fcb54a9ad9ed76252df96d69f720d9bf7b9f06a7b1e1097b1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1740398569426301329,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-wppmd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3ac8a794-0594-4f52-9913-0be11fa69de1,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f9fd7f6a836ad438b4303de06bb332627cb14c5703aa02db6e17745d264375,PodSandboxId:4781811988f436ec4e820e183ec3732fc2ca2d40209cd534a6c8c3e6352021e2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79
f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1740398551768834291,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-kgtx5,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b782d82c-5693-43f8-b6b5-6dd9468b7267,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402f4126fb4e53ed9cf1a8facb6f78763940b3ad887da2396d0442305c406748,PodSandboxId:714b63c4da11269424649ce6b9eae7ce04f730147a8a436c67e25ae50e0cfb85,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device
-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1740398534766061270,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-nfjdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ecaf333-99d2-4202-8da4-1c45a0e08bf6,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:534fcc6385a7f177784024250ff8f09d2c667f86c20880038f4646cf25c78101,PodSandboxId:1fcfd871af7dd31aa22c5f72fac0574f185a15b6aad7591074a0cec82ba5e1cb,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&Im
ageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1740398515960380242,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2aab4809-4155-42d4-be8d-47e92ad19bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0b4badb07a579593b1352ddb4cffbda05a11ec6e7a5a2d2c58d2e221627cb49,PodSandboxId:0d9
bbbcc6be1d7367b5371c4d6922094325db5e99ca0532d0dc8a7d62d60cd8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1740398505161794402,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c0c7f0-a2fe-4783-84cf-9aeef0a26d03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69f3da9381e66b7eb8c071f0e1423d233bc845c4370607b5d40ee2452887bf98,PodSandboxId:6118d49b1375c91
4854a50fa84c60bc485c0822ad1169cb05790d60ce1e3130e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1740398501452682964,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-whkc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a1bb96b-2f4c-4069-8026-f326ae12884a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa1d1797df1b0c6bf9c9f50b997710be61d8f9c576ce4a30e9eee4f841fec9f2,PodSandboxId:bcddd30dd8203d30fdebc6b3d16b176d48d482dff580b84b341095efad9bc7c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1740398497423956573,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xjthf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90a81b91-f36e-497c-b72a-a9e751c4aaf4,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19575553b86ddf85e9c9ae196416ddeac77c7e21eeebcf10393bb6f0b90b1032,PodSandboxId:fc5dd696e09420ef6af723f262dce8f679d24c99490601b34029c7bab066efb8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1740398486898704449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-641952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69666b73e7b9af958f3b3fd5678b8158,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:0c67b7530dae876fc6586a506fe5f5af22b581d83f734582e9a3e2a6726467b5,PodSandboxId:4842d18a7dfce0b76c1138131c894417b7efcf205708cadd5d0ddaa9b0780118,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1740398486845484788,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-641952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441902e74881bb00f51f81a209c3b6c9,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:9c2ad9352e4c845e5bd85f50a5f442d656492f7034f2cbbf2977309146c590bd,PodSandboxId:1c568402605e42b19701bf87f34eaf7c715a753a18177ba7086a4379fb8b7fb9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1740398486810158938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-641952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbdbb33b2e764fa091d66535c9d2b2b7,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8871773c8ca2a
1cb99abf2c228e0e647c2cc66df38323b597b5e03d7878ab691,PodSandboxId:41a8edbd7b39e3d3af8cc2eaa0f7d2a456579fd890a7fb4c67fb12239c3cd9b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1740398486813619706,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-641952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec3b31462ab5c36c1244f199298bdb4,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-c
ollector/interceptors.go:74" id=57e131a5-5efc-462d-aaff-7c722087381e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	46637b048bf9d       a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824                                                             9 seconds ago       Exited              helper-pod                0                   55bc07ac82a00       helper-pod-delete-pvc-cd8fcaca-bd54-49a6-9e22-383da91e5d0a
	a1d8570f93e6a       docker.io/library/busybox@sha256:7a5342b7662db8de99e045a2b47b889c5701b8dde0ce5ae3f1577bf57a15ed40                            13 seconds ago      Exited              busybox                   0                   9bcff00971a0c       test-local-path
	63a4b33e94f58       docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591                              2 minutes ago       Running             nginx                     0                   4a8330fa9ac4a       nginx
	f7125aea13d74       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   7daad2c22c8c8       busybox
	798959adc4dd4       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             4 minutes ago       Running             controller                0                   0420610b011de       ingress-nginx-controller-56d7c84fd4-ztj5d
	fb566ca36535a       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0             4 minutes ago       Exited              csi-attacher              0                   9b1088fee8bbc       csi-hostpath-attacher-0
	63006476ab534       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago       Exited              patch                     0                   19e8ba8c7c034       ingress-nginx-admission-patch-czfq4
	6e0d9f4c44360       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago       Exited              create                    0                   3c0f2165abe72       ingress-nginx-admission-create-wppmd
	51f9fd7f6a836       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   4781811988f43       local-path-provisioner-76f89f99b5-kgtx5
	402f4126fb4e5       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   714b63c4da112       amd-gpu-device-plugin-nfjdc
	534fcc6385a7f       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             5 minutes ago       Running             minikube-ingress-dns      0                   1fcfd871af7dd       kube-ingress-dns-minikube
	e0b4badb07a57       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   0d9bbbcc6be1d       storage-provisioner
	69f3da9381e66       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             5 minutes ago       Running             coredns                   0                   6118d49b1375c       coredns-668d6bf9bc-whkc9
	aa1d1797df1b0       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5                                                             5 minutes ago       Running             kube-proxy                0                   bcddd30dd8203       kube-proxy-xjthf
	19575553b86dd       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             5 minutes ago       Running             etcd                      0                   fc5dd696e0942       etcd-addons-641952
	0c67b7530dae8       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef                                                             5 minutes ago       Running             kube-apiserver            0                   4842d18a7dfce       kube-apiserver-addons-641952
	8871773c8ca2a       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389                                                             5 minutes ago       Running             kube-controller-manager   0                   41a8edbd7b39e       kube-controller-manager-addons-641952
	9c2ad9352e4c8       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d                                                             5 minutes ago       Running             kube-scheduler            0                   1c568402605e4       kube-scheduler-addons-641952
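	(The container status table above is CRI-O's view of the node. As a rough sketch for regenerating it, assuming the addons-641952 VM is still running, one could query crictl over minikube ssh; this is illustrative, not part of the captured logs:)
	
	    # List all containers (running and exited) as CRI-O reports them on the node
	    out/minikube-linux-amd64 -p addons-641952 ssh "sudo crictl ps -a"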
	
	
	==> coredns [69f3da9381e66b7eb8c071f0e1423d233bc845c4370607b5d40ee2452887bf98] <==
	[INFO] 10.244.0.22:34567 - 1696 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000237001s
	[INFO] 10.244.0.22:52166 - 53923 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000131853s
	[INFO] 10.244.0.22:34567 - 25074 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000127402s
	[INFO] 10.244.0.22:52166 - 42771 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000184697s
	[INFO] 10.244.0.22:34567 - 19056 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000222974s
	[INFO] 10.244.0.22:52166 - 47644 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00010778s
	[INFO] 10.244.0.22:34567 - 19405 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000071539s
	[INFO] 10.244.0.22:52166 - 1995 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000118569s
	[INFO] 10.244.0.22:34567 - 34227 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000063243s
	[INFO] 10.244.0.22:52166 - 35063 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00009722s
	[INFO] 10.244.0.22:34567 - 23745 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000058805s
	[INFO] 10.244.0.22:35890 - 61379 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000156511s
	[INFO] 10.244.0.22:44300 - 42577 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000146338s
	[INFO] 10.244.0.22:35890 - 39730 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000126525s
	[INFO] 10.244.0.22:44300 - 24500 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000188769s
	[INFO] 10.244.0.22:35890 - 10094 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000101631s
	[INFO] 10.244.0.22:35890 - 35989 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000062966s
	[INFO] 10.244.0.22:35890 - 26387 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.001142446s
	[INFO] 10.244.0.22:44300 - 43474 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.002341698s
	[INFO] 10.244.0.22:35890 - 48509 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000127787s
	[INFO] 10.244.0.22:35890 - 31561 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000122273s
	[INFO] 10.244.0.22:44300 - 57125 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.004490675s
	[INFO] 10.244.0.22:44300 - 10081 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000107894s
	[INFO] 10.244.0.22:44300 - 51686 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000130862s
	[INFO] 10.244.0.22:44300 - 11311 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000165787s
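	(The NXDOMAIN answers above are the pod resolver walking its search path: hello-world-app.default.svc.cluster.local is first tried with the ingress-nginx.svc.cluster.local, svc.cluster.local and cluster.local suffixes appended, and only the bare fully qualified name returns NOERROR; this is the usual ndots-driven expansion. A minimal sketch for reproducing such a lookup from inside the cluster, assuming a busybox image can be pulled, would be:)
	
	    # Run a throwaway pod and resolve the same service name CoreDNS is answering above
	    kubectl --context addons-641952 run dns-probe --image=busybox:1.28 --restart=Never --rm -it -- \
	      nslookup hello-world-app.default.svc.cluster.local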
	
	
	==> describe nodes <==
	Name:               addons-641952
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-641952
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76650f53499dbb51707efa4a87e94b72d747650
	                    minikube.k8s.io/name=addons-641952
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_24T12_01_33_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-641952
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Feb 2025 12:01:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-641952
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Feb 2025 12:07:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Feb 2025 12:07:09 +0000   Mon, 24 Feb 2025 12:01:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Feb 2025 12:07:09 +0000   Mon, 24 Feb 2025 12:01:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Feb 2025 12:07:09 +0000   Mon, 24 Feb 2025 12:01:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Feb 2025 12:07:09 +0000   Mon, 24 Feb 2025 12:01:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.150
	  Hostname:    addons-641952
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 5c4fe16978fa4b7283d0bfc312180b59
	  System UUID:                5c4fe169-78fa-4b72-83d0-bfc312180b59
	  Boot ID:                    5da4af36-3d05-4fd0-bd50-5bda5214ff20
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	  default                     hello-world-app-7d9564db4-nvfrp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-ztj5d    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m32s
	  kube-system                 amd-gpu-device-plugin-nfjdc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 coredns-668d6bf9bc-whkc9                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m41s
	  kube-system                 etcd-addons-641952                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m48s
	  kube-system                 kube-apiserver-addons-641952                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m46s
	  kube-system                 kube-controller-manager-addons-641952        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m46s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 kube-proxy-xjthf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	  kube-system                 kube-scheduler-addons-641952                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m46s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  local-path-storage          local-path-provisioner-76f89f99b5-kgtx5      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m41s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m52s (x8 over 5m52s)  kubelet          Node addons-641952 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m52s (x8 over 5m52s)  kubelet          Node addons-641952 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m52s (x7 over 5m52s)  kubelet          Node addons-641952 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m46s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m46s                  kubelet          Node addons-641952 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m46s                  kubelet          Node addons-641952 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m46s                  kubelet          Node addons-641952 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m45s                  kubelet          Node addons-641952 status is now: NodeReady
	  Normal  RegisteredNode           5m42s                  node-controller  Node addons-641952 event: Registered Node addons-641952 in Controller
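	(The node description above is standard kubectl describe output; the Allocated resources totals are simply the sums of the per-pod requests listed, CPU 100m+100m+100m+250m+200m+100m = 850m and memory 90Mi+70Mi+100Mi = 260Mi. Assuming the cluster is still up, it could be re-dumped with:)
	
	    # Re-dump the node status for the single control-plane node of this profile
	    kubectl --context addons-641952 describe node addons-641952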
	
	
	==> dmesg <==
	[  +6.187639] systemd-fstab-generator[1220]: Ignoring "noauto" option for root device
	[  +0.100669] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.026311] kauditd_printk_skb: 26 callbacks suppressed
	[  +0.392168] systemd-fstab-generator[1518]: Ignoring "noauto" option for root device
	[  +4.635599] kauditd_printk_skb: 99 callbacks suppressed
	[  +5.020424] kauditd_printk_skb: 128 callbacks suppressed
	[  +5.660134] kauditd_printk_skb: 84 callbacks suppressed
	[Feb24 12:02] kauditd_printk_skb: 15 callbacks suppressed
	[ +20.025520] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.264459] kauditd_printk_skb: 4 callbacks suppressed
	[ +11.144591] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.056839] kauditd_printk_skb: 46 callbacks suppressed
	[Feb24 12:03] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.500915] kauditd_printk_skb: 16 callbacks suppressed
	[Feb24 12:04] kauditd_printk_skb: 7 callbacks suppressed
	[ +13.625848] kauditd_printk_skb: 9 callbacks suppressed
	[ +11.070503] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.098302] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.183736] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.282971] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.938426] kauditd_printk_skb: 54 callbacks suppressed
	[Feb24 12:05] kauditd_printk_skb: 19 callbacks suppressed
	[Feb24 12:07] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.403940] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.623989] kauditd_printk_skb: 35 callbacks suppressed
	
	
	==> etcd [19575553b86ddf85e9c9ae196416ddeac77c7e21eeebcf10393bb6f0b90b1032] <==
	{"level":"info","ts":"2025-02-24T12:04:51.450561Z","caller":"traceutil/trace.go:171","msg":"trace[10940012] transaction","detail":"{read_only:false; response_revision:1529; number_of_response:1; }","duration":"232.187772ms","start":"2025-02-24T12:04:51.218357Z","end":"2025-02-24T12:04:51.450545Z","steps":["trace[10940012] 'process raft request'  (duration: 231.681212ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-24T12:05:01.984719Z","caller":"traceutil/trace.go:171","msg":"trace[113228979] linearizableReadLoop","detail":"{readStateIndex:1724; appliedIndex:1723; }","duration":"406.079169ms","start":"2025-02-24T12:05:01.578616Z","end":"2025-02-24T12:05:01.984695Z","steps":["trace[113228979] 'read index received'  (duration: 405.950908ms)","trace[113228979] 'applied index is now lower than readState.Index'  (duration: 127.924µs)"],"step_count":2}
	{"level":"info","ts":"2025-02-24T12:05:01.984973Z","caller":"traceutil/trace.go:171","msg":"trace[1002301626] transaction","detail":"{read_only:false; response_revision:1654; number_of_response:1; }","duration":"410.329394ms","start":"2025-02-24T12:05:01.574634Z","end":"2025-02-24T12:05:01.984963Z","steps":["trace[1002301626] 'process raft request'  (duration: 409.977701ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-24T12:05:01.985153Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-24T12:05:01.574618Z","time spent":"410.376077ms","remote":"127.0.0.1:37832","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1601 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2025-02-24T12:05:01.985339Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"406.719087ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.150\" limit:1 ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2025-02-24T12:05:01.985363Z","caller":"traceutil/trace.go:171","msg":"trace[1051493572] range","detail":"{range_begin:/registry/masterleases/192.168.39.150; range_end:; response_count:1; response_revision:1654; }","duration":"406.764718ms","start":"2025-02-24T12:05:01.578592Z","end":"2025-02-24T12:05:01.985357Z","steps":["trace[1051493572] 'agreement among raft nodes before linearized reading'  (duration: 406.684773ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-24T12:05:01.985380Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-24T12:05:01.578580Z","time spent":"406.79656ms","remote":"127.0.0.1:37578","response type":"/etcdserverpb.KV/Range","request count":0,"request size":41,"response count":1,"response size":158,"request content":"key:\"/registry/masterleases/192.168.39.150\" limit:1 "}
	{"level":"warn","ts":"2025-02-24T12:05:01.985742Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.437503ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-24T12:05:01.985764Z","caller":"traceutil/trace.go:171","msg":"trace[495664908] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1654; }","duration":"227.482515ms","start":"2025-02-24T12:05:01.758275Z","end":"2025-02-24T12:05:01.985757Z","steps":["trace[495664908] 'agreement among raft nodes before linearized reading'  (duration: 227.410128ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-24T12:05:01.985913Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.383077ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-24T12:05:01.985927Z","caller":"traceutil/trace.go:171","msg":"trace[1829276447] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1654; }","duration":"179.418601ms","start":"2025-02-24T12:05:01.806504Z","end":"2025-02-24T12:05:01.985922Z","steps":["trace[1829276447] 'agreement among raft nodes before linearized reading'  (duration: 179.394239ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-24T12:05:01.985992Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"226.102554ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-24T12:05:01.986008Z","caller":"traceutil/trace.go:171","msg":"trace[1271202236] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1654; }","duration":"226.130184ms","start":"2025-02-24T12:05:01.759870Z","end":"2025-02-24T12:05:01.986000Z","steps":["trace[1271202236] 'agreement among raft nodes before linearized reading'  (duration: 226.111253ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-24T12:05:33.068338Z","caller":"traceutil/trace.go:171","msg":"trace[490453802] linearizableReadLoop","detail":"{readStateIndex:1790; appliedIndex:1789; }","duration":"309.25701ms","start":"2025-02-24T12:05:32.759068Z","end":"2025-02-24T12:05:33.068325Z","steps":["trace[490453802] 'read index received'  (duration: 309.118628ms)","trace[490453802] 'applied index is now lower than readState.Index'  (duration: 137.997µs)"],"step_count":2}
	{"level":"info","ts":"2025-02-24T12:05:33.068726Z","caller":"traceutil/trace.go:171","msg":"trace[2048205793] transaction","detail":"{read_only:false; response_revision:1713; number_of_response:1; }","duration":"321.57275ms","start":"2025-02-24T12:05:32.747141Z","end":"2025-02-24T12:05:33.068713Z","steps":["trace[2048205793] 'process raft request'  (duration: 321.088896ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-24T12:05:33.068818Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-24T12:05:32.747127Z","time spent":"321.641718ms","remote":"127.0.0.1:37832","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1705 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"warn","ts":"2025-02-24T12:05:33.068972Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"309.898501ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-24T12:05:33.069012Z","caller":"traceutil/trace.go:171","msg":"trace[1405039990] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1713; }","duration":"309.961587ms","start":"2025-02-24T12:05:32.759044Z","end":"2025-02-24T12:05:33.069005Z","steps":["trace[1405039990] 'agreement among raft nodes before linearized reading'  (duration: 309.894308ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-24T12:05:33.069032Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-24T12:05:32.759029Z","time spent":"309.998236ms","remote":"127.0.0.1:37754","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-02-24T12:05:33.069153Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"263.614801ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-24T12:05:33.069188Z","caller":"traceutil/trace.go:171","msg":"trace[1576252142] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1713; }","duration":"263.668476ms","start":"2025-02-24T12:05:32.805514Z","end":"2025-02-24T12:05:33.069182Z","steps":["trace[1576252142] 'agreement among raft nodes before linearized reading'  (duration: 263.622492ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-24T12:07:15.735799Z","caller":"traceutil/trace.go:171","msg":"trace[862177915] linearizableReadLoop","detail":"{readStateIndex:2119; appliedIndex:2118; }","duration":"103.420008ms","start":"2025-02-24T12:07:15.632355Z","end":"2025-02-24T12:07:15.735775Z","steps":["trace[862177915] 'read index received'  (duration: 103.306865ms)","trace[862177915] 'applied index is now lower than readState.Index'  (duration: 112.741µs)"],"step_count":2}
	{"level":"info","ts":"2025-02-24T12:07:15.735921Z","caller":"traceutil/trace.go:171","msg":"trace[379038802] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2014; }","duration":"132.684172ms","start":"2025-02-24T12:07:15.603231Z","end":"2025-02-24T12:07:15.735915Z","steps":["trace[379038802] 'process raft request'  (duration: 132.471773ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-24T12:07:15.736324Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.914359ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csidrivers/\" range_end:\"/registry/csidrivers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-24T12:07:15.736355Z","caller":"traceutil/trace.go:171","msg":"trace[817813830] range","detail":"{range_begin:/registry/csidrivers/; range_end:/registry/csidrivers0; response_count:0; response_revision:2014; }","duration":"104.021707ms","start":"2025-02-24T12:07:15.632325Z","end":"2025-02-24T12:07:15.736347Z","steps":["trace[817813830] 'agreement among raft nodes before linearized reading'  (duration: 103.915603ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:07:19 up 6 min,  0 users,  load average: 0.63, 1.02, 0.58
	Linux addons-641952 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0c67b7530dae876fc6586a506fe5f5af22b581d83f734582e9a3e2a6726467b5] <==
	E0224 12:02:35.406240       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.119.132:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.119.132:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.119.132:443: connect: connection refused" logger="UnhandledError"
	I0224 12:02:35.489053       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0224 12:04:21.888779       1 conn.go:339] Error on socket receive: read tcp 192.168.39.150:8443->192.168.39.1:34294: use of closed network connection
	E0224 12:04:22.098729       1 conn.go:339] Error on socket receive: read tcp 192.168.39.150:8443->192.168.39.1:34324: use of closed network connection
	I0224 12:04:36.412911       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0224 12:04:43.024606       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0224 12:04:44.065512       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0224 12:04:53.802330       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0224 12:04:54.057328       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.227.238"}
	I0224 12:04:55.640285       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.228.247"}
	I0224 12:04:58.987314       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0224 12:07:13.873256       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0224 12:07:13.873343       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0224 12:07:13.901028       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0224 12:07:13.901275       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0224 12:07:13.948502       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0224 12:07:13.948577       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0224 12:07:13.965230       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0224 12:07:13.965344       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0224 12:07:13.981125       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0224 12:07:13.981309       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0224 12:07:14.967603       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0224 12:07:14.981564       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0224 12:07:15.021882       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	I0224 12:07:17.380316       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.238.252"}
	
	
	==> kube-controller-manager [8871773c8ca2a1cb99abf2c228e0e647c2cc66df38323b597b5e03d7878ab691] <==
	E0224 12:07:16.058724       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0224 12:07:16.526753       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0224 12:07:16.528154       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0224 12:07:16.529170       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0224 12:07:16.529245       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0224 12:07:16.538794       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0224 12:07:16.539800       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0224 12:07:16.540540       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0224 12:07:16.540597       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0224 12:07:17.246508       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="46.545238ms"
	I0224 12:07:17.271773       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="24.451805ms"
	I0224 12:07:17.272105       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="66.929µs"
	I0224 12:07:17.279155       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="41.291µs"
	W0224 12:07:18.074735       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0224 12:07:18.078388       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0224 12:07:18.082917       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0224 12:07:18.083085       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0224 12:07:18.200091       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0224 12:07:18.201197       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0224 12:07:18.202513       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0224 12:07:18.202545       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0224 12:07:18.909335       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0224 12:07:18.910430       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0224 12:07:18.911194       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0224 12:07:18.911258       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [aa1d1797df1b0c6bf9c9f50b997710be61d8f9c576ce4a30e9eee4f841fec9f2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0224 12:01:37.642325       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0224 12:01:37.659838       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.150"]
	E0224 12:01:37.659967       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0224 12:01:37.881620       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0224 12:01:37.883729       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0224 12:01:37.883754       1 server_linux.go:170] "Using iptables Proxier"
	I0224 12:01:37.914996       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0224 12:01:37.915304       1 server.go:497] "Version info" version="v1.32.2"
	I0224 12:01:37.920334       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 12:01:37.922104       1 config.go:199] "Starting service config controller"
	I0224 12:01:37.922125       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0224 12:01:37.922149       1 config.go:105] "Starting endpoint slice config controller"
	I0224 12:01:37.922152       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0224 12:01:37.926511       1 config.go:329] "Starting node config controller"
	I0224 12:01:37.926540       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0224 12:01:38.023044       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0224 12:01:38.023089       1 shared_informer.go:320] Caches are synced for service config
	I0224 12:01:38.027440       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9c2ad9352e4c845e5bd85f50a5f442d656492f7034f2cbbf2977309146c590bd] <==
	W0224 12:01:30.523493       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0224 12:01:30.523552       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0224 12:01:30.545475       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0224 12:01:30.545525       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0224 12:01:30.560176       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0224 12:01:30.560229       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0224 12:01:30.670974       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0224 12:01:30.671026       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0224 12:01:30.703763       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0224 12:01:30.703871       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0224 12:01:30.770730       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0224 12:01:30.770817       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0224 12:01:30.821212       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0224 12:01:30.821243       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0224 12:01:30.865060       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0224 12:01:30.865113       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0224 12:01:30.925992       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0224 12:01:30.926061       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0224 12:01:30.949244       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0224 12:01:30.949297       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0224 12:01:30.958540       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0224 12:01:30.958592       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0224 12:01:31.073277       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0224 12:01:31.073331       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0224 12:01:32.667208       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 24 12:07:16 addons-641952 kubelet[1227]: I0224 12:07:16.804811    1227 scope.go:117] "RemoveContainer" containerID="128b4e4ac3e7b3fe70b6e49a6bd3ebf69b84362cac67c857978260881b1b4fe6"
	Feb 24 12:07:16 addons-641952 kubelet[1227]: I0224 12:07:16.805258    1227 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"128b4e4ac3e7b3fe70b6e49a6bd3ebf69b84362cac67c857978260881b1b4fe6"} err="failed to get container status \"128b4e4ac3e7b3fe70b6e49a6bd3ebf69b84362cac67c857978260881b1b4fe6\": rpc error: code = NotFound desc = could not find container \"128b4e4ac3e7b3fe70b6e49a6bd3ebf69b84362cac67c857978260881b1b4fe6\": container with ID starting with 128b4e4ac3e7b3fe70b6e49a6bd3ebf69b84362cac67c857978260881b1b4fe6 not found: ID does not exist"
	Feb 24 12:07:16 addons-641952 kubelet[1227]: I0224 12:07:16.805274    1227 scope.go:117] "RemoveContainer" containerID="f28474c13bcacce84ad74ae0d75f25afe49760b50fbd728658be89dab762c4dc"
	Feb 24 12:07:16 addons-641952 kubelet[1227]: I0224 12:07:16.805818    1227 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f28474c13bcacce84ad74ae0d75f25afe49760b50fbd728658be89dab762c4dc"} err="failed to get container status \"f28474c13bcacce84ad74ae0d75f25afe49760b50fbd728658be89dab762c4dc\": rpc error: code = NotFound desc = could not find container \"f28474c13bcacce84ad74ae0d75f25afe49760b50fbd728658be89dab762c4dc\": container with ID starting with f28474c13bcacce84ad74ae0d75f25afe49760b50fbd728658be89dab762c4dc not found: ID does not exist"
	Feb 24 12:07:16 addons-641952 kubelet[1227]: I0224 12:07:16.805834    1227 scope.go:117] "RemoveContainer" containerID="46e4a7ef751bc217ca9313234ea979d9b8061ac7fc245df836b930136efd8736"
	Feb 24 12:07:16 addons-641952 kubelet[1227]: I0224 12:07:16.806178    1227 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46e4a7ef751bc217ca9313234ea979d9b8061ac7fc245df836b930136efd8736"} err="failed to get container status \"46e4a7ef751bc217ca9313234ea979d9b8061ac7fc245df836b930136efd8736\": rpc error: code = NotFound desc = could not find container \"46e4a7ef751bc217ca9313234ea979d9b8061ac7fc245df836b930136efd8736\": container with ID starting with 46e4a7ef751bc217ca9313234ea979d9b8061ac7fc245df836b930136efd8736 not found: ID does not exist"
	Feb 24 12:07:16 addons-641952 kubelet[1227]: I0224 12:07:16.806192    1227 scope.go:117] "RemoveContainer" containerID="891ada322da0eb84512c6bb0285f303ae84aa30f3c9b1bd4c686b7a5ffff1231"
	Feb 24 12:07:16 addons-641952 kubelet[1227]: I0224 12:07:16.922660    1227 scope.go:117] "RemoveContainer" containerID="891ada322da0eb84512c6bb0285f303ae84aa30f3c9b1bd4c686b7a5ffff1231"
	Feb 24 12:07:16 addons-641952 kubelet[1227]: E0224 12:07:16.923325    1227 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"891ada322da0eb84512c6bb0285f303ae84aa30f3c9b1bd4c686b7a5ffff1231\": container with ID starting with 891ada322da0eb84512c6bb0285f303ae84aa30f3c9b1bd4c686b7a5ffff1231 not found: ID does not exist" containerID="891ada322da0eb84512c6bb0285f303ae84aa30f3c9b1bd4c686b7a5ffff1231"
	Feb 24 12:07:16 addons-641952 kubelet[1227]: I0224 12:07:16.923373    1227 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"891ada322da0eb84512c6bb0285f303ae84aa30f3c9b1bd4c686b7a5ffff1231"} err="failed to get container status \"891ada322da0eb84512c6bb0285f303ae84aa30f3c9b1bd4c686b7a5ffff1231\": rpc error: code = NotFound desc = could not find container \"891ada322da0eb84512c6bb0285f303ae84aa30f3c9b1bd4c686b7a5ffff1231\": container with ID starting with 891ada322da0eb84512c6bb0285f303ae84aa30f3c9b1bd4c686b7a5ffff1231 not found: ID does not exist"
	Feb 24 12:07:17 addons-641952 kubelet[1227]: I0224 12:07:17.241752    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="d9f95591-d226-40c0-a05b-280d3df8196b" containerName="liveness-probe"
	Feb 24 12:07:17 addons-641952 kubelet[1227]: I0224 12:07:17.242083    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="d9f95591-d226-40c0-a05b-280d3df8196b" containerName="csi-provisioner"
	Feb 24 12:07:17 addons-641952 kubelet[1227]: I0224 12:07:17.242125    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="cda1d6b9-eb2a-4868-9e8d-c3e12704c446" containerName="task-pv-container"
	Feb 24 12:07:17 addons-641952 kubelet[1227]: I0224 12:07:17.242156    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="c4cd93ee-8c13-4e37-b55c-1f354cee0c0a" containerName="volume-snapshot-controller"
	Feb 24 12:07:17 addons-641952 kubelet[1227]: I0224 12:07:17.242198    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="4d9127c8-8f6a-4b86-9f26-8c0980ad15d2" containerName="csi-attacher"
	Feb 24 12:07:17 addons-641952 kubelet[1227]: I0224 12:07:17.242230    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="d9f95591-d226-40c0-a05b-280d3df8196b" containerName="csi-snapshotter"
	Feb 24 12:07:17 addons-641952 kubelet[1227]: I0224 12:07:17.242271    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="d9f95591-d226-40c0-a05b-280d3df8196b" containerName="node-driver-registrar"
	Feb 24 12:07:17 addons-641952 kubelet[1227]: I0224 12:07:17.242301    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="d5c67ff0-12f6-4f90-b72d-b5b97d137f55" containerName="headlamp"
	Feb 24 12:07:17 addons-641952 kubelet[1227]: I0224 12:07:17.242332    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="c258058d-9590-4471-aa67-f700fe27369c" containerName="csi-resizer"
	Feb 24 12:07:17 addons-641952 kubelet[1227]: I0224 12:07:17.242367    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="d9f95591-d226-40c0-a05b-280d3df8196b" containerName="hostpath"
	Feb 24 12:07:17 addons-641952 kubelet[1227]: I0224 12:07:17.242480    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="04269895-727a-436f-a780-44dc89844082" containerName="volume-snapshot-controller"
	Feb 24 12:07:17 addons-641952 kubelet[1227]: I0224 12:07:17.242518    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="d9f95591-d226-40c0-a05b-280d3df8196b" containerName="csi-external-health-monitor-controller"
	Feb 24 12:07:17 addons-641952 kubelet[1227]: I0224 12:07:17.242550    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="f63e0054-265b-4ddb-915b-17bf3efe01a3" containerName="helper-pod"
	Feb 24 12:07:17 addons-641952 kubelet[1227]: I0224 12:07:17.423674    1227 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rdrg\" (UniqueName: \"kubernetes.io/projected/2956af2e-42b0-4231-9b3a-e00bb389b404-kube-api-access-4rdrg\") pod \"hello-world-app-7d9564db4-nvfrp\" (UID: \"2956af2e-42b0-4231-9b3a-e00bb389b404\") " pod="default/hello-world-app-7d9564db4-nvfrp"
	Feb 24 12:07:18 addons-641952 kubelet[1227]: I0224 12:07:18.611028    1227 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d9127c8-8f6a-4b86-9f26-8c0980ad15d2" path="/var/lib/kubelet/pods/4d9127c8-8f6a-4b86-9f26-8c0980ad15d2/volumes"
	
	
	==> storage-provisioner [e0b4badb07a579593b1352ddb4cffbda05a11ec6e7a5a2d2c58d2e221627cb49] <==
	I0224 12:01:46.286175       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0224 12:01:46.427154       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0224 12:01:46.427215       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0224 12:01:46.487603       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0224 12:01:46.487772       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-641952_0f2c0be4-70a9-4d61-99d4-fa28d92e2cc6!
	I0224 12:01:46.511836       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9ee5be3f-11e0-45ab-99d1-81b378dad9a2", APIVersion:"v1", ResourceVersion:"687", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-641952_0f2c0be4-70a9-4d61-99d4-fa28d92e2cc6 became leader
	I0224 12:01:46.958506       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-641952_0f2c0be4-70a9-4d61-99d4-fa28d92e2cc6!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-641952 -n addons-641952
helpers_test.go:261: (dbg) Run:  kubectl --context addons-641952 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-nvfrp ingress-nginx-admission-create-wppmd ingress-nginx-admission-patch-czfq4
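The field-selector query above is how the post-mortem helper finds pods that are not yet Running. For reference, a minimal client-go sketch of the same query (kubeconfig path, program structure, and error handling are assumptions, not the helpers_test.go implementation):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig (~/.kube/config); the path is an assumption.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same filter as the kubectl call above: pods in any namespace whose phase is not Running.
		pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s (%s)\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}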
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-641952 describe pod hello-world-app-7d9564db4-nvfrp ingress-nginx-admission-create-wppmd ingress-nginx-admission-patch-czfq4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-641952 describe pod hello-world-app-7d9564db4-nvfrp ingress-nginx-admission-create-wppmd ingress-nginx-admission-patch-czfq4: exit status 1 (71.394432ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-7d9564db4-nvfrp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-641952/192.168.39.150
	Start Time:       Mon, 24 Feb 2025 12:07:17 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=7d9564db4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-7d9564db4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4rdrg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4rdrg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-nvfrp to addons-641952
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-wppmd" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-czfq4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-641952 describe pod hello-world-app-7d9564db4-nvfrp ingress-nginx-admission-create-wppmd ingress-nginx-admission-patch-czfq4: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-641952 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-641952 addons disable ingress-dns --alsologtostderr -v=1: (1.276715149s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-641952 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-641952 addons disable ingress --alsologtostderr -v=1: (7.776838517s)
--- FAIL: TestAddons/parallel/Ingress (155.56s)

                                                
                                    
TestPreload (298.22s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-993368 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0224 13:01:46.849627  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-993368 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m8.475213583s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-993368 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-993368 image pull gcr.io/k8s-minikube/busybox: (3.621085994s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-993368
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-993368: (1m30.9826784s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-993368 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0224 13:04:12.768814  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-993368 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m12.008672519s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-993368 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
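The list above contains the preloaded v1.24.4 images but not gcr.io/k8s-minikube/busybox, which is what preload_test.go:76 asserts on. A minimal sketch of that kind of check, assuming it simply scans the newline-separated `image list` output for the expected repository (the helper is hypothetical, not the actual test code):

	package main

	import (
		"fmt"
		"strings"
	)

	// imageListContains reports whether any line of `minikube image list` output
	// mentions the given repository. Hypothetical helper, not preload_test.go code.
	func imageListContains(output, repo string) bool {
		for _, line := range strings.Split(output, "\n") {
			if strings.Contains(strings.TrimSpace(line), repo) {
				return true
			}
		}
		return false
	}

	func main() {
		out := "registry.k8s.io/pause:3.7\nk8s.gcr.io/etcd:3.5.3-0\n"
		// Prints false: the busybox image pulled before the stop/start cycle is absent.
		fmt.Println(imageListContains(out, "gcr.io/k8s-minikube/busybox"))
	}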
panic.go:629: *** TestPreload FAILED at 2025-02-24 13:05:04.124659332 +0000 UTC m=+3901.593074474
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-993368 -n test-preload-993368
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-993368 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-993368 logs -n 25: (1.188490483s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-397129 ssh -n                                                                 | multinode-397129     | jenkins | v1.35.0 | 24 Feb 25 12:47 UTC | 24 Feb 25 12:47 UTC |
	|         | multinode-397129-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-397129 ssh -n multinode-397129 sudo cat                                       | multinode-397129     | jenkins | v1.35.0 | 24 Feb 25 12:47 UTC | 24 Feb 25 12:47 UTC |
	|         | /home/docker/cp-test_multinode-397129-m03_multinode-397129.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-397129 cp multinode-397129-m03:/home/docker/cp-test.txt                       | multinode-397129     | jenkins | v1.35.0 | 24 Feb 25 12:47 UTC | 24 Feb 25 12:47 UTC |
	|         | multinode-397129-m02:/home/docker/cp-test_multinode-397129-m03_multinode-397129-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-397129 ssh -n                                                                 | multinode-397129     | jenkins | v1.35.0 | 24 Feb 25 12:47 UTC | 24 Feb 25 12:47 UTC |
	|         | multinode-397129-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-397129 ssh -n multinode-397129-m02 sudo cat                                   | multinode-397129     | jenkins | v1.35.0 | 24 Feb 25 12:47 UTC | 24 Feb 25 12:47 UTC |
	|         | /home/docker/cp-test_multinode-397129-m03_multinode-397129-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-397129 node stop m03                                                          | multinode-397129     | jenkins | v1.35.0 | 24 Feb 25 12:47 UTC | 24 Feb 25 12:47 UTC |
	| node    | multinode-397129 node start                                                             | multinode-397129     | jenkins | v1.35.0 | 24 Feb 25 12:47 UTC | 24 Feb 25 12:48 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-397129                                                                | multinode-397129     | jenkins | v1.35.0 | 24 Feb 25 12:48 UTC |                     |
	| stop    | -p multinode-397129                                                                     | multinode-397129     | jenkins | v1.35.0 | 24 Feb 25 12:48 UTC | 24 Feb 25 12:51 UTC |
	| start   | -p multinode-397129                                                                     | multinode-397129     | jenkins | v1.35.0 | 24 Feb 25 12:51 UTC | 24 Feb 25 12:54 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-397129                                                                | multinode-397129     | jenkins | v1.35.0 | 24 Feb 25 12:54 UTC |                     |
	| node    | multinode-397129 node delete                                                            | multinode-397129     | jenkins | v1.35.0 | 24 Feb 25 12:54 UTC | 24 Feb 25 12:54 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-397129 stop                                                                   | multinode-397129     | jenkins | v1.35.0 | 24 Feb 25 12:54 UTC | 24 Feb 25 12:57 UTC |
	| start   | -p multinode-397129                                                                     | multinode-397129     | jenkins | v1.35.0 | 24 Feb 25 12:57 UTC | 24 Feb 25 12:59 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-397129                                                                | multinode-397129     | jenkins | v1.35.0 | 24 Feb 25 12:59 UTC |                     |
	| start   | -p multinode-397129-m02                                                                 | multinode-397129-m02 | jenkins | v1.35.0 | 24 Feb 25 12:59 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-397129-m03                                                                 | multinode-397129-m03 | jenkins | v1.35.0 | 24 Feb 25 12:59 UTC | 24 Feb 25 13:00 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-397129                                                                 | multinode-397129     | jenkins | v1.35.0 | 24 Feb 25 13:00 UTC |                     |
	| delete  | -p multinode-397129-m03                                                                 | multinode-397129-m03 | jenkins | v1.35.0 | 24 Feb 25 13:00 UTC | 24 Feb 25 13:00 UTC |
	| delete  | -p multinode-397129                                                                     | multinode-397129     | jenkins | v1.35.0 | 24 Feb 25 13:00 UTC | 24 Feb 25 13:00 UTC |
	| start   | -p test-preload-993368                                                                  | test-preload-993368  | jenkins | v1.35.0 | 24 Feb 25 13:00 UTC | 24 Feb 25 13:02 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-993368 image pull                                                          | test-preload-993368  | jenkins | v1.35.0 | 24 Feb 25 13:02 UTC | 24 Feb 25 13:02 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-993368                                                                  | test-preload-993368  | jenkins | v1.35.0 | 24 Feb 25 13:02 UTC | 24 Feb 25 13:03 UTC |
	| start   | -p test-preload-993368                                                                  | test-preload-993368  | jenkins | v1.35.0 | 24 Feb 25 13:03 UTC | 24 Feb 25 13:05 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-993368 image list                                                          | test-preload-993368  | jenkins | v1.35.0 | 24 Feb 25 13:05 UTC | 24 Feb 25 13:05 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/24 13:03:51
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 13:03:51.934526  927487 out.go:345] Setting OutFile to fd 1 ...
	I0224 13:03:51.934633  927487 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 13:03:51.934640  927487 out.go:358] Setting ErrFile to fd 2...
	I0224 13:03:51.934645  927487 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 13:03:51.934893  927487 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-887294/.minikube/bin
	I0224 13:03:51.935465  927487 out.go:352] Setting JSON to false
	I0224 13:03:51.936438  927487 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9973,"bootTime":1740392259,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 13:03:51.936557  927487 start.go:139] virtualization: kvm guest
	I0224 13:03:51.941659  927487 out.go:177] * [test-preload-993368] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0224 13:03:51.943216  927487 out.go:177]   - MINIKUBE_LOCATION=20451
	I0224 13:03:51.943244  927487 notify.go:220] Checking for updates...
	I0224 13:03:51.946295  927487 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 13:03:51.947581  927487 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20451-887294/kubeconfig
	I0224 13:03:51.948791  927487 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-887294/.minikube
	I0224 13:03:51.949961  927487 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0224 13:03:51.951111  927487 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 13:03:51.952793  927487 config.go:182] Loaded profile config "test-preload-993368": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0224 13:03:51.953245  927487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:03:51.953327  927487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:03:51.968528  927487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40063
	I0224 13:03:51.969001  927487 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:03:51.969623  927487 main.go:141] libmachine: Using API Version  1
	I0224 13:03:51.969646  927487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:03:51.970002  927487 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:03:51.970202  927487 main.go:141] libmachine: (test-preload-993368) Calling .DriverName
	I0224 13:03:51.972867  927487 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0224 13:03:51.974424  927487 driver.go:394] Setting default libvirt URI to qemu:///system
	I0224 13:03:51.974782  927487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:03:51.974826  927487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:03:51.989962  927487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37821
	I0224 13:03:51.990428  927487 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:03:51.990964  927487 main.go:141] libmachine: Using API Version  1
	I0224 13:03:51.990987  927487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:03:51.991312  927487 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:03:51.991511  927487 main.go:141] libmachine: (test-preload-993368) Calling .DriverName
	I0224 13:03:52.028630  927487 out.go:177] * Using the kvm2 driver based on existing profile
	I0224 13:03:52.029989  927487 start.go:297] selected driver: kvm2
	I0224 13:03:52.030006  927487 start.go:901] validating driver "kvm2" against &{Name:test-preload-993368 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-993368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.199 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 13:03:52.030188  927487 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 13:03:52.031093  927487 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 13:03:52.031168  927487 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20451-887294/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0224 13:03:52.046684  927487 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0224 13:03:52.047054  927487 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0224 13:03:52.047099  927487 cni.go:84] Creating CNI manager for ""
	I0224 13:03:52.047158  927487 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0224 13:03:52.047214  927487 start.go:340] cluster config:
	{Name:test-preload-993368 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-993368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.199 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 13:03:52.047313  927487 iso.go:125] acquiring lock: {Name:mk57408cca66a96a13d93cda43cdfac6e61aef3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 13:03:52.049835  927487 out.go:177] * Starting "test-preload-993368" primary control-plane node in "test-preload-993368" cluster
	I0224 13:03:52.050819  927487 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0224 13:03:52.639259  927487 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0224 13:03:52.639292  927487 cache.go:56] Caching tarball of preloaded images
	I0224 13:03:52.639488  927487 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0224 13:03:52.641581  927487 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0224 13:03:52.642747  927487 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0224 13:03:52.752362  927487 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0224 13:04:04.999396  927487 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0224 13:04:04.999499  927487 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0224 13:04:05.864857  927487 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
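
The download above pins an md5 value in the URL's checksum query parameter, and the following log lines record saving and verifying that checksum. For illustration only, here is a minimal Go sketch of that kind of verification; it is not minikube's preload code, the tarball path passed on the command line is hypothetical, and the expected hash is simply the one visible in the URL.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// expectedMD5 is the value pinned in the ?checksum= query string of the download URL above.
const expectedMD5 = "b2ee0ab83ed99f9e7ff71cb0cf27e8f9"

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: check-md5 <path-to-preload-tarball>")
		os.Exit(2)
	}
	f, err := os.Open(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()
	h := md5.New()
	// Stream the file through the hash so a ~450 MB tarball is never held in memory.
	if _, err := io.Copy(h, f); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != expectedMD5 {
		fmt.Printf("checksum mismatch: got %s, want %s\n", got, expectedMD5)
		os.Exit(1)
	}
	fmt.Println("checksum OK")
}
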
	I0224 13:04:05.864999  927487 profile.go:143] Saving config to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/test-preload-993368/config.json ...
	I0224 13:04:05.865240  927487 start.go:360] acquireMachinesLock for test-preload-993368: {Name:mk023761b01bb629a1acd40bc8104cc517b0e15b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0224 13:04:05.865339  927487 start.go:364] duration metric: took 73.349µs to acquireMachinesLock for "test-preload-993368"
	I0224 13:04:05.865358  927487 start.go:96] Skipping create...Using existing machine configuration
	I0224 13:04:05.865366  927487 fix.go:54] fixHost starting: 
	I0224 13:04:05.865665  927487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:04:05.865708  927487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:04:05.880918  927487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36429
	I0224 13:04:05.881536  927487 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:04:05.882119  927487 main.go:141] libmachine: Using API Version  1
	I0224 13:04:05.882157  927487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:04:05.882529  927487 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:04:05.882746  927487 main.go:141] libmachine: (test-preload-993368) Calling .DriverName
	I0224 13:04:05.882910  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetState
	I0224 13:04:05.884685  927487 fix.go:112] recreateIfNeeded on test-preload-993368: state=Stopped err=<nil>
	I0224 13:04:05.884709  927487 main.go:141] libmachine: (test-preload-993368) Calling .DriverName
	W0224 13:04:05.884867  927487 fix.go:138] unexpected machine state, will restart: <nil>
	I0224 13:04:05.887497  927487 out.go:177] * Restarting existing kvm2 VM for "test-preload-993368" ...
	I0224 13:04:05.889272  927487 main.go:141] libmachine: (test-preload-993368) Calling .Start
	I0224 13:04:05.889542  927487 main.go:141] libmachine: (test-preload-993368) starting domain...
	I0224 13:04:05.889576  927487 main.go:141] libmachine: (test-preload-993368) ensuring networks are active...
	I0224 13:04:05.890387  927487 main.go:141] libmachine: (test-preload-993368) Ensuring network default is active
	I0224 13:04:05.890630  927487 main.go:141] libmachine: (test-preload-993368) Ensuring network mk-test-preload-993368 is active
	I0224 13:04:05.890945  927487 main.go:141] libmachine: (test-preload-993368) getting domain XML...
	I0224 13:04:05.891595  927487 main.go:141] libmachine: (test-preload-993368) creating domain...
	I0224 13:04:07.132389  927487 main.go:141] libmachine: (test-preload-993368) waiting for IP...
	I0224 13:04:07.133592  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:07.134125  927487 main.go:141] libmachine: (test-preload-993368) DBG | unable to find current IP address of domain test-preload-993368 in network mk-test-preload-993368
	I0224 13:04:07.134157  927487 main.go:141] libmachine: (test-preload-993368) DBG | I0224 13:04:07.134070  927571 retry.go:31] will retry after 228.878885ms: waiting for domain to come up
	I0224 13:04:07.364601  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:07.365237  927487 main.go:141] libmachine: (test-preload-993368) DBG | unable to find current IP address of domain test-preload-993368 in network mk-test-preload-993368
	I0224 13:04:07.365265  927487 main.go:141] libmachine: (test-preload-993368) DBG | I0224 13:04:07.365193  927571 retry.go:31] will retry after 244.556897ms: waiting for domain to come up
	I0224 13:04:07.611775  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:07.612280  927487 main.go:141] libmachine: (test-preload-993368) DBG | unable to find current IP address of domain test-preload-993368 in network mk-test-preload-993368
	I0224 13:04:07.612350  927487 main.go:141] libmachine: (test-preload-993368) DBG | I0224 13:04:07.612276  927571 retry.go:31] will retry after 460.334716ms: waiting for domain to come up
	I0224 13:04:08.073817  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:08.074250  927487 main.go:141] libmachine: (test-preload-993368) DBG | unable to find current IP address of domain test-preload-993368 in network mk-test-preload-993368
	I0224 13:04:08.074310  927487 main.go:141] libmachine: (test-preload-993368) DBG | I0224 13:04:08.074226  927571 retry.go:31] will retry after 547.099031ms: waiting for domain to come up
	I0224 13:04:08.622789  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:08.623234  927487 main.go:141] libmachine: (test-preload-993368) DBG | unable to find current IP address of domain test-preload-993368 in network mk-test-preload-993368
	I0224 13:04:08.623265  927487 main.go:141] libmachine: (test-preload-993368) DBG | I0224 13:04:08.623203  927571 retry.go:31] will retry after 698.233981ms: waiting for domain to come up
	I0224 13:04:09.323275  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:09.323746  927487 main.go:141] libmachine: (test-preload-993368) DBG | unable to find current IP address of domain test-preload-993368 in network mk-test-preload-993368
	I0224 13:04:09.323776  927487 main.go:141] libmachine: (test-preload-993368) DBG | I0224 13:04:09.323697  927571 retry.go:31] will retry after 771.40555ms: waiting for domain to come up
	I0224 13:04:10.096958  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:10.097465  927487 main.go:141] libmachine: (test-preload-993368) DBG | unable to find current IP address of domain test-preload-993368 in network mk-test-preload-993368
	I0224 13:04:10.097513  927487 main.go:141] libmachine: (test-preload-993368) DBG | I0224 13:04:10.097441  927571 retry.go:31] will retry after 834.450308ms: waiting for domain to come up
	I0224 13:04:10.934312  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:10.934752  927487 main.go:141] libmachine: (test-preload-993368) DBG | unable to find current IP address of domain test-preload-993368 in network mk-test-preload-993368
	I0224 13:04:10.934798  927487 main.go:141] libmachine: (test-preload-993368) DBG | I0224 13:04:10.934725  927571 retry.go:31] will retry after 1.375727129s: waiting for domain to come up
	I0224 13:04:12.312518  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:12.312997  927487 main.go:141] libmachine: (test-preload-993368) DBG | unable to find current IP address of domain test-preload-993368 in network mk-test-preload-993368
	I0224 13:04:12.313023  927487 main.go:141] libmachine: (test-preload-993368) DBG | I0224 13:04:12.312970  927571 retry.go:31] will retry after 1.522924862s: waiting for domain to come up
	I0224 13:04:13.837923  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:13.838353  927487 main.go:141] libmachine: (test-preload-993368) DBG | unable to find current IP address of domain test-preload-993368 in network mk-test-preload-993368
	I0224 13:04:13.838380  927487 main.go:141] libmachine: (test-preload-993368) DBG | I0224 13:04:13.838321  927571 retry.go:31] will retry after 2.23689365s: waiting for domain to come up
	I0224 13:04:16.077349  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:16.077796  927487 main.go:141] libmachine: (test-preload-993368) DBG | unable to find current IP address of domain test-preload-993368 in network mk-test-preload-993368
	I0224 13:04:16.077851  927487 main.go:141] libmachine: (test-preload-993368) DBG | I0224 13:04:16.077772  927571 retry.go:31] will retry after 2.662267807s: waiting for domain to come up
	I0224 13:04:18.743649  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:18.744055  927487 main.go:141] libmachine: (test-preload-993368) DBG | unable to find current IP address of domain test-preload-993368 in network mk-test-preload-993368
	I0224 13:04:18.744083  927487 main.go:141] libmachine: (test-preload-993368) DBG | I0224 13:04:18.744018  927571 retry.go:31] will retry after 3.595515071s: waiting for domain to come up
	I0224 13:04:22.341185  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:22.341534  927487 main.go:141] libmachine: (test-preload-993368) DBG | unable to find current IP address of domain test-preload-993368 in network mk-test-preload-993368
	I0224 13:04:22.341558  927487 main.go:141] libmachine: (test-preload-993368) DBG | I0224 13:04:22.341490  927571 retry.go:31] will retry after 2.739335386s: waiting for domain to come up
	I0224 13:04:25.084567  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:25.085084  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has current primary IP address 192.168.39.199 and MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:25.085111  927487 main.go:141] libmachine: (test-preload-993368) found domain IP: 192.168.39.199
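
The "will retry after ..." lines above show the domain-IP wait as repeated polls with jittered, roughly growing delays. The following is a generic sketch of that retry pattern in Go, under the assumption that a simple exponential-plus-jitter loop is a fair stand-in; it is not minikube's retry.go, and the simulated failure in main is purely illustrative.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil calls fn until it succeeds or attempts run out, sleeping a
// randomized, growing delay between attempts, much like the jittered
// 228ms, 244ms, 460ms, ... sequence logged above.
func retryUntil(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retryUntil(10, 200*time.Millisecond, func() error {
		calls++
		if calls < 4 { // pretend the domain needs a few polls before it has an IP
			return errors.New("waiting for domain to come up")
		}
		return nil
	})
	fmt.Println("done:", err)
}
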
	I0224 13:04:25.085123  927487 main.go:141] libmachine: (test-preload-993368) reserving static IP address...
	I0224 13:04:25.085604  927487 main.go:141] libmachine: (test-preload-993368) DBG | found host DHCP lease matching {name: "test-preload-993368", mac: "52:54:00:95:10:7a", ip: "192.168.39.199"} in network mk-test-preload-993368: {Iface:virbr1 ExpiryTime:2025-02-24 14:00:24 +0000 UTC Type:0 Mac:52:54:00:95:10:7a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:test-preload-993368 Clientid:01:52:54:00:95:10:7a}
	I0224 13:04:25.085628  927487 main.go:141] libmachine: (test-preload-993368) DBG | skip adding static IP to network mk-test-preload-993368 - found existing host DHCP lease matching {name: "test-preload-993368", mac: "52:54:00:95:10:7a", ip: "192.168.39.199"}
	I0224 13:04:25.085642  927487 main.go:141] libmachine: (test-preload-993368) reserved static IP address 192.168.39.199 for domain test-preload-993368
	I0224 13:04:25.085675  927487 main.go:141] libmachine: (test-preload-993368) DBG | Getting to WaitForSSH function...
	I0224 13:04:25.085696  927487 main.go:141] libmachine: (test-preload-993368) waiting for SSH...
	I0224 13:04:25.087876  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:25.088363  927487 main.go:141] libmachine: (test-preload-993368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:10:7a", ip: ""} in network mk-test-preload-993368: {Iface:virbr1 ExpiryTime:2025-02-24 14:00:24 +0000 UTC Type:0 Mac:52:54:00:95:10:7a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:test-preload-993368 Clientid:01:52:54:00:95:10:7a}
	I0224 13:04:25.088395  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined IP address 192.168.39.199 and MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:25.088578  927487 main.go:141] libmachine: (test-preload-993368) DBG | Using SSH client type: external
	I0224 13:04:25.088611  927487 main.go:141] libmachine: (test-preload-993368) DBG | Using SSH private key: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/test-preload-993368/id_rsa (-rw-------)
	I0224 13:04:25.088642  927487 main.go:141] libmachine: (test-preload-993368) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.199 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20451-887294/.minikube/machines/test-preload-993368/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0224 13:04:25.088657  927487 main.go:141] libmachine: (test-preload-993368) DBG | About to run SSH command:
	I0224 13:04:25.088670  927487 main.go:141] libmachine: (test-preload-993368) DBG | exit 0
	I0224 13:04:25.217891  927487 main.go:141] libmachine: (test-preload-993368) DBG | SSH cmd err, output: <nil>: 
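
The WaitForSSH step above shells out to the external ssh client with the flags shown and treats a successful "exit 0" as proof the guest is reachable. A rough Go sketch of that probe follows, assuming a placeholder key path and reusing the host and a subset of the flags from the logged command; it is an illustration, not the libmachine implementation.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/path/to/id_rsa", // placeholder; the real run uses the machine's id_rsa
		"docker@192.168.39.199",
		"exit 0", // a no-op command: success means SSH is up
	}
	if err := exec.Command("ssh", args...).Run(); err != nil {
		fmt.Println("ssh not ready yet:", err)
		return
	}
	fmt.Println("ssh is up")
}
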
	I0224 13:04:25.218298  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetConfigRaw
	I0224 13:04:25.219046  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetIP
	I0224 13:04:25.221613  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:25.221929  927487 main.go:141] libmachine: (test-preload-993368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:10:7a", ip: ""} in network mk-test-preload-993368: {Iface:virbr1 ExpiryTime:2025-02-24 14:00:24 +0000 UTC Type:0 Mac:52:54:00:95:10:7a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:test-preload-993368 Clientid:01:52:54:00:95:10:7a}
	I0224 13:04:25.221968  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined IP address 192.168.39.199 and MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:25.222234  927487 profile.go:143] Saving config to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/test-preload-993368/config.json ...
	I0224 13:04:25.222456  927487 machine.go:93] provisionDockerMachine start ...
	I0224 13:04:25.222479  927487 main.go:141] libmachine: (test-preload-993368) Calling .DriverName
	I0224 13:04:25.222702  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHHostname
	I0224 13:04:25.225134  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:25.225448  927487 main.go:141] libmachine: (test-preload-993368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:10:7a", ip: ""} in network mk-test-preload-993368: {Iface:virbr1 ExpiryTime:2025-02-24 14:00:24 +0000 UTC Type:0 Mac:52:54:00:95:10:7a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:test-preload-993368 Clientid:01:52:54:00:95:10:7a}
	I0224 13:04:25.225479  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined IP address 192.168.39.199 and MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:25.225612  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHPort
	I0224 13:04:25.225800  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHKeyPath
	I0224 13:04:25.226008  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHKeyPath
	I0224 13:04:25.226157  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHUsername
	I0224 13:04:25.226321  927487 main.go:141] libmachine: Using SSH client type: native
	I0224 13:04:25.226510  927487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0224 13:04:25.226522  927487 main.go:141] libmachine: About to run SSH command:
	hostname
	I0224 13:04:25.337924  927487 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0224 13:04:25.337954  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetMachineName
	I0224 13:04:25.338202  927487 buildroot.go:166] provisioning hostname "test-preload-993368"
	I0224 13:04:25.338221  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetMachineName
	I0224 13:04:25.338416  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHHostname
	I0224 13:04:25.341129  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:25.341523  927487 main.go:141] libmachine: (test-preload-993368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:10:7a", ip: ""} in network mk-test-preload-993368: {Iface:virbr1 ExpiryTime:2025-02-24 14:00:24 +0000 UTC Type:0 Mac:52:54:00:95:10:7a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:test-preload-993368 Clientid:01:52:54:00:95:10:7a}
	I0224 13:04:25.341558  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined IP address 192.168.39.199 and MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:25.341678  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHPort
	I0224 13:04:25.341878  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHKeyPath
	I0224 13:04:25.342030  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHKeyPath
	I0224 13:04:25.342129  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHUsername
	I0224 13:04:25.342276  927487 main.go:141] libmachine: Using SSH client type: native
	I0224 13:04:25.342511  927487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0224 13:04:25.342531  927487 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-993368 && echo "test-preload-993368" | sudo tee /etc/hostname
	I0224 13:04:25.468944  927487 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-993368
	
	I0224 13:04:25.468975  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHHostname
	I0224 13:04:25.471649  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:25.471966  927487 main.go:141] libmachine: (test-preload-993368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:10:7a", ip: ""} in network mk-test-preload-993368: {Iface:virbr1 ExpiryTime:2025-02-24 14:00:24 +0000 UTC Type:0 Mac:52:54:00:95:10:7a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:test-preload-993368 Clientid:01:52:54:00:95:10:7a}
	I0224 13:04:25.471992  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined IP address 192.168.39.199 and MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:25.472148  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHPort
	I0224 13:04:25.472387  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHKeyPath
	I0224 13:04:25.472553  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHKeyPath
	I0224 13:04:25.472699  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHUsername
	I0224 13:04:25.472863  927487 main.go:141] libmachine: Using SSH client type: native
	I0224 13:04:25.473058  927487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0224 13:04:25.473077  927487 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-993368' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-993368/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-993368' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 13:04:25.595059  927487 main.go:141] libmachine: SSH cmd err, output: <nil>: 
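
The shell snippet above makes the hostname mapping in /etc/hosts idempotent: rewrite an existing 127.0.1.1 entry if present, otherwise append one. As a sketch only, the same idea in Go, operating on a scratch copy of a hosts-style file rather than the real /etc/hosts; the file path and permissions here are assumptions.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostname guarantees exactly one "127.0.1.1 <name>" line in a hosts-style file.
func ensureHostname(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			replaced = true
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+name)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostname("./hosts.copy", "test-preload-993368"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
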
	I0224 13:04:25.595110  927487 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20451-887294/.minikube CaCertPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20451-887294/.minikube}
	I0224 13:04:25.595173  927487 buildroot.go:174] setting up certificates
	I0224 13:04:25.595198  927487 provision.go:84] configureAuth start
	I0224 13:04:25.595216  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetMachineName
	I0224 13:04:25.595539  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetIP
	I0224 13:04:25.598610  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:25.599015  927487 main.go:141] libmachine: (test-preload-993368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:10:7a", ip: ""} in network mk-test-preload-993368: {Iface:virbr1 ExpiryTime:2025-02-24 14:00:24 +0000 UTC Type:0 Mac:52:54:00:95:10:7a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:test-preload-993368 Clientid:01:52:54:00:95:10:7a}
	I0224 13:04:25.599054  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined IP address 192.168.39.199 and MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:25.599283  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHHostname
	I0224 13:04:25.601762  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:25.602096  927487 main.go:141] libmachine: (test-preload-993368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:10:7a", ip: ""} in network mk-test-preload-993368: {Iface:virbr1 ExpiryTime:2025-02-24 14:00:24 +0000 UTC Type:0 Mac:52:54:00:95:10:7a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:test-preload-993368 Clientid:01:52:54:00:95:10:7a}
	I0224 13:04:25.602132  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined IP address 192.168.39.199 and MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:25.602251  927487 provision.go:143] copyHostCerts
	I0224 13:04:25.602324  927487 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-887294/.minikube/ca.pem, removing ...
	I0224 13:04:25.602336  927487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-887294/.minikube/ca.pem
	I0224 13:04:25.602400  927487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20451-887294/.minikube/ca.pem (1082 bytes)
	I0224 13:04:25.602516  927487 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-887294/.minikube/cert.pem, removing ...
	I0224 13:04:25.602529  927487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-887294/.minikube/cert.pem
	I0224 13:04:25.602553  927487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20451-887294/.minikube/cert.pem (1123 bytes)
	I0224 13:04:25.602603  927487 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-887294/.minikube/key.pem, removing ...
	I0224 13:04:25.602610  927487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-887294/.minikube/key.pem
	I0224 13:04:25.602630  927487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20451-887294/.minikube/key.pem (1679 bytes)
	I0224 13:04:25.602680  927487 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca-key.pem org=jenkins.test-preload-993368 san=[127.0.0.1 192.168.39.199 localhost minikube test-preload-993368]
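
The line above records issuing a server certificate whose subject alternative names cover the loopback address, the VM IP, and the machine names. Purely as an illustration of how such a certificate can be produced with the Go standard library, here is a self-signed sketch using those SANs; the real flow signs with the ca.pem/ca-key.pem pair rather than self-signing, and the organization and expiry values below are just echoes of the logged config.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-993368"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "test-preload-993368"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.199")},
	}
	// Self-signed for brevity: template doubles as its own parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
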
	I0224 13:04:25.827596  927487 provision.go:177] copyRemoteCerts
	I0224 13:04:25.827662  927487 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 13:04:25.827688  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHHostname
	I0224 13:04:25.830485  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:25.830784  927487 main.go:141] libmachine: (test-preload-993368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:10:7a", ip: ""} in network mk-test-preload-993368: {Iface:virbr1 ExpiryTime:2025-02-24 14:00:24 +0000 UTC Type:0 Mac:52:54:00:95:10:7a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:test-preload-993368 Clientid:01:52:54:00:95:10:7a}
	I0224 13:04:25.830826  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined IP address 192.168.39.199 and MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:25.831015  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHPort
	I0224 13:04:25.831211  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHKeyPath
	I0224 13:04:25.831397  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHUsername
	I0224 13:04:25.831542  927487 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/test-preload-993368/id_rsa Username:docker}
	I0224 13:04:25.915983  927487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0224 13:04:25.941557  927487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0224 13:04:25.967079  927487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0224 13:04:25.992763  927487 provision.go:87] duration metric: took 397.549537ms to configureAuth
	I0224 13:04:25.992794  927487 buildroot.go:189] setting minikube options for container-runtime
	I0224 13:04:25.992965  927487 config.go:182] Loaded profile config "test-preload-993368": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0224 13:04:25.993057  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHHostname
	I0224 13:04:25.995832  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:25.996201  927487 main.go:141] libmachine: (test-preload-993368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:10:7a", ip: ""} in network mk-test-preload-993368: {Iface:virbr1 ExpiryTime:2025-02-24 14:00:24 +0000 UTC Type:0 Mac:52:54:00:95:10:7a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:test-preload-993368 Clientid:01:52:54:00:95:10:7a}
	I0224 13:04:25.996233  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined IP address 192.168.39.199 and MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:25.996485  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHPort
	I0224 13:04:25.996710  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHKeyPath
	I0224 13:04:25.996864  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHKeyPath
	I0224 13:04:25.996971  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHUsername
	I0224 13:04:25.997135  927487 main.go:141] libmachine: Using SSH client type: native
	I0224 13:04:25.997324  927487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0224 13:04:25.997345  927487 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0224 13:04:26.234398  927487 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0224 13:04:26.234431  927487 machine.go:96] duration metric: took 1.011961126s to provisionDockerMachine
	I0224 13:04:26.234450  927487 start.go:293] postStartSetup for "test-preload-993368" (driver="kvm2")
	I0224 13:04:26.234462  927487 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 13:04:26.234508  927487 main.go:141] libmachine: (test-preload-993368) Calling .DriverName
	I0224 13:04:26.234856  927487 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 13:04:26.234886  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHHostname
	I0224 13:04:26.237431  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:26.237763  927487 main.go:141] libmachine: (test-preload-993368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:10:7a", ip: ""} in network mk-test-preload-993368: {Iface:virbr1 ExpiryTime:2025-02-24 14:00:24 +0000 UTC Type:0 Mac:52:54:00:95:10:7a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:test-preload-993368 Clientid:01:52:54:00:95:10:7a}
	I0224 13:04:26.237792  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined IP address 192.168.39.199 and MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:26.237913  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHPort
	I0224 13:04:26.238144  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHKeyPath
	I0224 13:04:26.238303  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHUsername
	I0224 13:04:26.238491  927487 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/test-preload-993368/id_rsa Username:docker}
	I0224 13:04:26.324517  927487 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 13:04:26.329337  927487 info.go:137] Remote host: Buildroot 2023.02.9
	I0224 13:04:26.329371  927487 filesync.go:126] Scanning /home/jenkins/minikube-integration/20451-887294/.minikube/addons for local assets ...
	I0224 13:04:26.329439  927487 filesync.go:126] Scanning /home/jenkins/minikube-integration/20451-887294/.minikube/files for local assets ...
	I0224 13:04:26.329537  927487 filesync.go:149] local asset: /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/ssl/certs/8945642.pem -> 8945642.pem in /etc/ssl/certs
	I0224 13:04:26.329634  927487 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0224 13:04:26.339993  927487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/ssl/certs/8945642.pem --> /etc/ssl/certs/8945642.pem (1708 bytes)
	I0224 13:04:26.366079  927487 start.go:296] duration metric: took 131.608383ms for postStartSetup
	I0224 13:04:26.366141  927487 fix.go:56] duration metric: took 20.500774586s for fixHost
	I0224 13:04:26.366171  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHHostname
	I0224 13:04:26.369053  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:26.369433  927487 main.go:141] libmachine: (test-preload-993368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:10:7a", ip: ""} in network mk-test-preload-993368: {Iface:virbr1 ExpiryTime:2025-02-24 14:00:24 +0000 UTC Type:0 Mac:52:54:00:95:10:7a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:test-preload-993368 Clientid:01:52:54:00:95:10:7a}
	I0224 13:04:26.369465  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined IP address 192.168.39.199 and MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:26.369686  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHPort
	I0224 13:04:26.369951  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHKeyPath
	I0224 13:04:26.370148  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHKeyPath
	I0224 13:04:26.370284  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHUsername
	I0224 13:04:26.370448  927487 main.go:141] libmachine: Using SSH client type: native
	I0224 13:04:26.370623  927487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0224 13:04:26.370635  927487 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0224 13:04:26.482510  927487 main.go:141] libmachine: SSH cmd err, output: <nil>: 1740402266.456332994
	
	I0224 13:04:26.482533  927487 fix.go:216] guest clock: 1740402266.456332994
	I0224 13:04:26.482541  927487 fix.go:229] Guest: 2025-02-24 13:04:26.456332994 +0000 UTC Remote: 2025-02-24 13:04:26.3661473 +0000 UTC m=+34.474089422 (delta=90.185694ms)
	I0224 13:04:26.482585  927487 fix.go:200] guest clock delta is within tolerance: 90.185694ms
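
The delta reported above can be checked directly from the two timestamps in the log: the guest clock 1740402266.456332994 (Unix seconds) versus the host-side reading 2025-02-24 13:04:26.3661473 UTC. A two-line Go check, included only to make the arithmetic explicit:

package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1740402266, 456332994)                    // 2025-02-24 13:04:26.456332994 UTC
	host := time.Date(2025, 2, 24, 13, 4, 26, 366147300, time.UTC)
	fmt.Println("delta:", guest.Sub(host)) // prints 90.185694ms, matching the log
}
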
	I0224 13:04:26.482591  927487 start.go:83] releasing machines lock for "test-preload-993368", held for 20.617241652s
	I0224 13:04:26.482611  927487 main.go:141] libmachine: (test-preload-993368) Calling .DriverName
	I0224 13:04:26.482932  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetIP
	I0224 13:04:26.485874  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:26.486221  927487 main.go:141] libmachine: (test-preload-993368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:10:7a", ip: ""} in network mk-test-preload-993368: {Iface:virbr1 ExpiryTime:2025-02-24 14:00:24 +0000 UTC Type:0 Mac:52:54:00:95:10:7a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:test-preload-993368 Clientid:01:52:54:00:95:10:7a}
	I0224 13:04:26.486254  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined IP address 192.168.39.199 and MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:26.486401  927487 main.go:141] libmachine: (test-preload-993368) Calling .DriverName
	I0224 13:04:26.487018  927487 main.go:141] libmachine: (test-preload-993368) Calling .DriverName
	I0224 13:04:26.487242  927487 main.go:141] libmachine: (test-preload-993368) Calling .DriverName
	I0224 13:04:26.487360  927487 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0224 13:04:26.487407  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHHostname
	I0224 13:04:26.487535  927487 ssh_runner.go:195] Run: cat /version.json
	I0224 13:04:26.487579  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHHostname
	I0224 13:04:26.490160  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:26.490304  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:26.490532  927487 main.go:141] libmachine: (test-preload-993368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:10:7a", ip: ""} in network mk-test-preload-993368: {Iface:virbr1 ExpiryTime:2025-02-24 14:00:24 +0000 UTC Type:0 Mac:52:54:00:95:10:7a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:test-preload-993368 Clientid:01:52:54:00:95:10:7a}
	I0224 13:04:26.490560  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined IP address 192.168.39.199 and MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:26.490658  927487 main.go:141] libmachine: (test-preload-993368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:10:7a", ip: ""} in network mk-test-preload-993368: {Iface:virbr1 ExpiryTime:2025-02-24 14:00:24 +0000 UTC Type:0 Mac:52:54:00:95:10:7a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:test-preload-993368 Clientid:01:52:54:00:95:10:7a}
	I0224 13:04:26.490696  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined IP address 192.168.39.199 and MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:26.490755  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHPort
	I0224 13:04:26.490888  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHPort
	I0224 13:04:26.490972  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHKeyPath
	I0224 13:04:26.491010  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHKeyPath
	I0224 13:04:26.491091  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHUsername
	I0224 13:04:26.491177  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHUsername
	I0224 13:04:26.491256  927487 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/test-preload-993368/id_rsa Username:docker}
	I0224 13:04:26.491323  927487 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/test-preload-993368/id_rsa Username:docker}
	I0224 13:04:26.598481  927487 ssh_runner.go:195] Run: systemctl --version
	I0224 13:04:26.605002  927487 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0224 13:04:26.760177  927487 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0224 13:04:26.766770  927487 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0224 13:04:26.766854  927487 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0224 13:04:26.785187  927487 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0224 13:04:26.785217  927487 start.go:495] detecting cgroup driver to use...
	I0224 13:04:26.785287  927487 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0224 13:04:26.805132  927487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0224 13:04:26.821324  927487 docker.go:217] disabling cri-docker service (if available) ...
	I0224 13:04:26.821396  927487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0224 13:04:26.836824  927487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0224 13:04:26.852457  927487 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0224 13:04:26.972354  927487 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0224 13:04:27.145559  927487 docker.go:233] disabling docker service ...
	I0224 13:04:27.145649  927487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0224 13:04:27.160658  927487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0224 13:04:27.174828  927487 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0224 13:04:27.296357  927487 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0224 13:04:27.424252  927487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0224 13:04:27.440328  927487 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 13:04:27.460061  927487 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0224 13:04:27.460137  927487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:04:27.471691  927487 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0224 13:04:27.471776  927487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:04:27.483405  927487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:04:27.495411  927487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:04:27.506912  927487 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 13:04:27.519734  927487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:04:27.531040  927487 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:04:27.550809  927487 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:04:27.562428  927487 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 13:04:27.572841  927487 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0224 13:04:27.572903  927487 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0224 13:04:27.586431  927487 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0224 13:04:27.596708  927487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 13:04:27.719931  927487 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0224 13:04:27.812749  927487 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0224 13:04:27.812835  927487 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
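
The step above waits up to 60s for the CRI-O socket to appear before moving on. A minimal sketch of that poll-with-deadline pattern in Go, assuming a fixed 500ms poll interval (the interval is not stated in the log) and the socket path shown above:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// waitForPath polls until path exists or the timeout elapses.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for " + path)
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket is present")
}
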
	I0224 13:04:27.818380  927487 start.go:563] Will wait 60s for crictl version
	I0224 13:04:27.818450  927487 ssh_runner.go:195] Run: which crictl
	I0224 13:04:27.822558  927487 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0224 13:04:27.865464  927487 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0224 13:04:27.865557  927487 ssh_runner.go:195] Run: crio --version
	I0224 13:04:27.895375  927487 ssh_runner.go:195] Run: crio --version
	I0224 13:04:27.925952  927487 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0224 13:04:27.927415  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetIP
	I0224 13:04:27.930536  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:27.930948  927487 main.go:141] libmachine: (test-preload-993368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:10:7a", ip: ""} in network mk-test-preload-993368: {Iface:virbr1 ExpiryTime:2025-02-24 14:00:24 +0000 UTC Type:0 Mac:52:54:00:95:10:7a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:test-preload-993368 Clientid:01:52:54:00:95:10:7a}
	I0224 13:04:27.930980  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined IP address 192.168.39.199 and MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:27.931147  927487 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0224 13:04:27.935606  927487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 13:04:27.948436  927487 kubeadm.go:883] updating cluster {Name:test-preload-993368 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-prelo
ad-993368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.199 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mo
untOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0224 13:04:27.948587  927487 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0224 13:04:27.948638  927487 ssh_runner.go:195] Run: sudo crictl images --output json
	I0224 13:04:27.984691  927487 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0224 13:04:27.984762  927487 ssh_runner.go:195] Run: which lz4
	I0224 13:04:27.989086  927487 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0224 13:04:27.993502  927487 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0224 13:04:27.993540  927487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0224 13:04:29.652240  927487 crio.go:462] duration metric: took 1.663192102s to copy over tarball
	I0224 13:04:29.652347  927487 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0224 13:04:32.157105  927487 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.50471152s)
	I0224 13:04:32.157140  927487 crio.go:469] duration metric: took 2.504858185s to extract the tarball
	I0224 13:04:32.157153  927487 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0224 13:04:32.200494  927487 ssh_runner.go:195] Run: sudo crictl images --output json
	I0224 13:04:32.243675  927487 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0224 13:04:32.243706  927487 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0224 13:04:32.243818  927487 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0224 13:04:32.243848  927487 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0224 13:04:32.243854  927487 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0224 13:04:32.243869  927487 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0224 13:04:32.243858  927487 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0224 13:04:32.243818  927487 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 13:04:32.243824  927487 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0224 13:04:32.243827  927487 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0224 13:04:32.245428  927487 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0224 13:04:32.245643  927487 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0224 13:04:32.245688  927487 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 13:04:32.245698  927487 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0224 13:04:32.245785  927487 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0224 13:04:32.245862  927487 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0224 13:04:32.245948  927487 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0224 13:04:32.246115  927487 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0224 13:04:32.410472  927487 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0224 13:04:32.441812  927487 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0224 13:04:32.442116  927487 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0224 13:04:32.450581  927487 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0224 13:04:32.464851  927487 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0224 13:04:32.464927  927487 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0224 13:04:32.464978  927487 ssh_runner.go:195] Run: which crictl
	I0224 13:04:32.503922  927487 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0224 13:04:32.503971  927487 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0224 13:04:32.504012  927487 ssh_runner.go:195] Run: which crictl
	I0224 13:04:32.535952  927487 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0224 13:04:32.536003  927487 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0224 13:04:32.536021  927487 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0224 13:04:32.536054  927487 ssh_runner.go:195] Run: which crictl
	I0224 13:04:32.536084  927487 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0224 13:04:32.536142  927487 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0224 13:04:32.536054  927487 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0224 13:04:32.536243  927487 ssh_runner.go:195] Run: which crictl
	I0224 13:04:32.588169  927487 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0224 13:04:32.588238  927487 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0224 13:04:32.603819  927487 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0224 13:04:32.606471  927487 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0224 13:04:32.606471  927487 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0224 13:04:32.607280  927487 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0224 13:04:32.644586  927487 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0224 13:04:32.689785  927487 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0224 13:04:32.689808  927487 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0224 13:04:32.758804  927487 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0224 13:04:32.758860  927487 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0224 13:04:32.758918  927487 ssh_runner.go:195] Run: which crictl
	I0224 13:04:32.758922  927487 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0224 13:04:32.758954  927487 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0224 13:04:32.765757  927487 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0224 13:04:32.765809  927487 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0224 13:04:32.765888  927487 ssh_runner.go:195] Run: which crictl
	I0224 13:04:32.852328  927487 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0224 13:04:32.852375  927487 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0224 13:04:32.852397  927487 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0224 13:04:32.852434  927487 ssh_runner.go:195] Run: which crictl
	I0224 13:04:32.852498  927487 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0224 13:04:32.852523  927487 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0224 13:04:32.852597  927487 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0224 13:04:32.866576  927487 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0224 13:04:32.866582  927487 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0224 13:04:32.866674  927487 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0224 13:04:32.866606  927487 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0224 13:04:32.879652  927487 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0224 13:04:32.972550  927487 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0224 13:04:32.973955  927487 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0224 13:04:32.973979  927487 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0224 13:04:32.974028  927487 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0224 13:04:32.974105  927487 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0224 13:04:32.974186  927487 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0224 13:04:32.974206  927487 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0224 13:04:32.974231  927487 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0224 13:04:32.974253  927487 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0224 13:04:32.974269  927487 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0224 13:04:32.997416  927487 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0224 13:04:33.071247  927487 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0224 13:04:33.338524  927487 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 13:04:35.913867  927487 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4: (2.939805223s)
	I0224 13:04:35.913922  927487 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: (2.939632535s)
	I0224 13:04:35.913934  927487 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0224 13:04:35.913956  927487 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0224 13:04:35.913965  927487 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0224 13:04:35.913983  927487 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4: (2.939760606s)
	I0224 13:04:35.914018  927487 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0224 13:04:35.914036  927487 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0224 13:04:35.914061  927487 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.939803678s)
	I0224 13:04:35.914091  927487 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0224 13:04:35.914100  927487 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0: (2.916657409s)
	I0224 13:04:35.914160  927487 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4: (2.842883068s)
	I0224 13:04:35.914191  927487 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0224 13:04:35.914195  927487 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0224 13:04:35.914242  927487 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.575686987s)
	I0224 13:04:35.914303  927487 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0224 13:04:36.387667  927487 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0224 13:04:36.387705  927487 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0224 13:04:36.387738  927487 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0224 13:04:36.387797  927487 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0224 13:04:36.387819  927487 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0224 13:04:36.387866  927487 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0224 13:04:36.387799  927487 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0224 13:04:36.387922  927487 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0224 13:04:36.393018  927487 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0224 13:04:36.396635  927487 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0224 13:04:36.528273  927487 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0224 13:04:36.528310  927487 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0224 13:04:36.528367  927487 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0224 13:04:37.275758  927487 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0224 13:04:37.275802  927487 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0224 13:04:37.275864  927487 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0224 13:04:37.725262  927487 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0224 13:04:37.725297  927487 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0224 13:04:37.725372  927487 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0224 13:04:39.878984  927487 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.153574584s)
	I0224 13:04:39.879028  927487 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0224 13:04:39.879047  927487 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0224 13:04:39.879098  927487 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0224 13:04:40.623578  927487 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0224 13:04:40.623637  927487 cache_images.go:123] Successfully loaded all cached images
	I0224 13:04:40.623645  927487 cache_images.go:92] duration metric: took 8.379925779s to LoadCachedImages
	I0224 13:04:40.623663  927487 kubeadm.go:934] updating node { 192.168.39.199 8443 v1.24.4 crio true true} ...
	I0224 13:04:40.623817  927487 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-993368 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.199
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-993368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0224 13:04:40.623888  927487 ssh_runner.go:195] Run: crio config
	I0224 13:04:40.672789  927487 cni.go:84] Creating CNI manager for ""
	I0224 13:04:40.672814  927487 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0224 13:04:40.672825  927487 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0224 13:04:40.672844  927487 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.199 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-993368 NodeName:test-preload-993368 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.199"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.199 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0224 13:04:40.673007  927487 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.199
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-993368"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.199
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.199"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0224 13:04:40.673076  927487 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0224 13:04:40.684106  927487 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 13:04:40.684213  927487 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0224 13:04:40.694290  927487 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0224 13:04:40.712759  927487 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0224 13:04:40.730527  927487 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0224 13:04:40.748629  927487 ssh_runner.go:195] Run: grep 192.168.39.199	control-plane.minikube.internal$ /etc/hosts
	I0224 13:04:40.752871  927487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.199	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 13:04:40.765871  927487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 13:04:40.880362  927487 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0224 13:04:40.897628  927487 certs.go:68] Setting up /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/test-preload-993368 for IP: 192.168.39.199
	I0224 13:04:40.897660  927487 certs.go:194] generating shared ca certs ...
	I0224 13:04:40.897683  927487 certs.go:226] acquiring lock for ca certs: {Name:mk38777c6b180f63d1816020cff79a01106ddf33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:04:40.897875  927487 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20451-887294/.minikube/ca.key
	I0224 13:04:40.897951  927487 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20451-887294/.minikube/proxy-client-ca.key
	I0224 13:04:40.897964  927487 certs.go:256] generating profile certs ...
	I0224 13:04:40.898075  927487 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/test-preload-993368/client.key
	I0224 13:04:40.898164  927487 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/test-preload-993368/apiserver.key.4ec3c6c0
	I0224 13:04:40.898216  927487 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/test-preload-993368/proxy-client.key
	I0224 13:04:40.898392  927487 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/894564.pem (1338 bytes)
	W0224 13:04:40.898442  927487 certs.go:480] ignoring /home/jenkins/minikube-integration/20451-887294/.minikube/certs/894564_empty.pem, impossibly tiny 0 bytes
	I0224 13:04:40.898456  927487 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca-key.pem (1679 bytes)
	I0224 13:04:40.898490  927487 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem (1082 bytes)
	I0224 13:04:40.898525  927487 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem (1123 bytes)
	I0224 13:04:40.898553  927487 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/key.pem (1679 bytes)
	I0224 13:04:40.898608  927487 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/ssl/certs/8945642.pem (1708 bytes)
	I0224 13:04:40.899549  927487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0224 13:04:40.964459  927487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0224 13:04:41.005137  927487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0224 13:04:41.042959  927487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0224 13:04:41.073999  927487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/test-preload-993368/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0224 13:04:41.105254  927487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/test-preload-993368/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0224 13:04:41.145921  927487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/test-preload-993368/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0224 13:04:41.171120  927487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/test-preload-993368/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0224 13:04:41.198105  927487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0224 13:04:41.224460  927487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/certs/894564.pem --> /usr/share/ca-certificates/894564.pem (1338 bytes)
	I0224 13:04:41.249996  927487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/ssl/certs/8945642.pem --> /usr/share/ca-certificates/8945642.pem (1708 bytes)
	I0224 13:04:41.275638  927487 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0224 13:04:41.294295  927487 ssh_runner.go:195] Run: openssl version
	I0224 13:04:41.300831  927487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0224 13:04:41.312584  927487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0224 13:04:41.317603  927487 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 24 12:01 /usr/share/ca-certificates/minikubeCA.pem
	I0224 13:04:41.317669  927487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0224 13:04:41.323851  927487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0224 13:04:41.335551  927487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/894564.pem && ln -fs /usr/share/ca-certificates/894564.pem /etc/ssl/certs/894564.pem"
	I0224 13:04:41.347360  927487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/894564.pem
	I0224 13:04:41.352363  927487 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 24 12:09 /usr/share/ca-certificates/894564.pem
	I0224 13:04:41.352429  927487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/894564.pem
	I0224 13:04:41.358721  927487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/894564.pem /etc/ssl/certs/51391683.0"
	I0224 13:04:41.370736  927487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8945642.pem && ln -fs /usr/share/ca-certificates/8945642.pem /etc/ssl/certs/8945642.pem"
	I0224 13:04:41.382972  927487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8945642.pem
	I0224 13:04:41.388047  927487 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 24 12:09 /usr/share/ca-certificates/8945642.pem
	I0224 13:04:41.388126  927487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8945642.pem
	I0224 13:04:41.394378  927487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8945642.pem /etc/ssl/certs/3ec20f2e.0"
	I0224 13:04:41.406187  927487 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0224 13:04:41.411300  927487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0224 13:04:41.417944  927487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0224 13:04:41.424579  927487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0224 13:04:41.431016  927487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0224 13:04:41.437466  927487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0224 13:04:41.443963  927487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0224 13:04:41.450544  927487 kubeadm.go:392] StartCluster: {Name:test-preload-993368 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-
993368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.199 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 13:04:41.450650  927487 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0224 13:04:41.450703  927487 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0224 13:04:41.492579  927487 cri.go:89] found id: ""
	I0224 13:04:41.492646  927487 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0224 13:04:41.503277  927487 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0224 13:04:41.503301  927487 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0224 13:04:41.503357  927487 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0224 13:04:41.513427  927487 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0224 13:04:41.513970  927487 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-993368" does not appear in /home/jenkins/minikube-integration/20451-887294/kubeconfig
	I0224 13:04:41.514096  927487 kubeconfig.go:62] /home/jenkins/minikube-integration/20451-887294/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-993368" cluster setting kubeconfig missing "test-preload-993368" context setting]
	I0224 13:04:41.514441  927487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/kubeconfig: {Name:mk0122b69f41cd40d5267f436266ccce22ce5ef2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:04:41.515037  927487 kapi.go:59] client config for test-preload-993368: &rest.Config{Host:"https://192.168.39.199:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20451-887294/.minikube/profiles/test-preload-993368/client.crt", KeyFile:"/home/jenkins/minikube-integration/20451-887294/.minikube/profiles/test-preload-993368/client.key", CAFile:"/home/jenkins/minikube-integration/20451-887294/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24da640), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 13:04:41.515426  927487 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0224 13:04:41.515440  927487 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0224 13:04:41.515444  927487 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0224 13:04:41.515448  927487 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0224 13:04:41.515840  927487 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0224 13:04:41.526048  927487 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.199
	I0224 13:04:41.526088  927487 kubeadm.go:1160] stopping kube-system containers ...
	I0224 13:04:41.526105  927487 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0224 13:04:41.526193  927487 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0224 13:04:41.562446  927487 cri.go:89] found id: ""
	I0224 13:04:41.562553  927487 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0224 13:04:41.580428  927487 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 13:04:41.590878  927487 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 13:04:41.590899  927487 kubeadm.go:157] found existing configuration files:
	
	I0224 13:04:41.590969  927487 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0224 13:04:41.600433  927487 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0224 13:04:41.600507  927487 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0224 13:04:41.610346  927487 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0224 13:04:41.619601  927487 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0224 13:04:41.619661  927487 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0224 13:04:41.629537  927487 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0224 13:04:41.638994  927487 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0224 13:04:41.639052  927487 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0224 13:04:41.648994  927487 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0224 13:04:41.658595  927487 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0224 13:04:41.658659  927487 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0224 13:04:41.669075  927487 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0224 13:04:41.679337  927487 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 13:04:41.785585  927487 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 13:04:42.528740  927487 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0224 13:04:42.802648  927487 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 13:04:42.867903  927487 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0224 13:04:42.933967  927487 api_server.go:52] waiting for apiserver process to appear ...
	I0224 13:04:42.934074  927487 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:04:43.435148  927487 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:04:43.934311  927487 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:04:44.026297  927487 api_server.go:72] duration metric: took 1.092328057s to wait for apiserver process to appear ...
	I0224 13:04:44.026336  927487 api_server.go:88] waiting for apiserver healthz status ...
	I0224 13:04:44.026364  927487 api_server.go:253] Checking apiserver healthz at https://192.168.39.199:8443/healthz ...
	I0224 13:04:44.026983  927487 api_server.go:269] stopped: https://192.168.39.199:8443/healthz: Get "https://192.168.39.199:8443/healthz": dial tcp 192.168.39.199:8443: connect: connection refused
	I0224 13:04:44.526658  927487 api_server.go:253] Checking apiserver healthz at https://192.168.39.199:8443/healthz ...
	I0224 13:04:48.278559  927487 api_server.go:279] https://192.168.39.199:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0224 13:04:48.278605  927487 api_server.go:103] status: https://192.168.39.199:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0224 13:04:48.278628  927487 api_server.go:253] Checking apiserver healthz at https://192.168.39.199:8443/healthz ...
	I0224 13:04:48.314936  927487 api_server.go:279] https://192.168.39.199:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0224 13:04:48.314977  927487 api_server.go:103] status: https://192.168.39.199:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0224 13:04:48.527490  927487 api_server.go:253] Checking apiserver healthz at https://192.168.39.199:8443/healthz ...
	I0224 13:04:48.535669  927487 api_server.go:279] https://192.168.39.199:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0224 13:04:48.535705  927487 api_server.go:103] status: https://192.168.39.199:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 13:04:49.027364  927487 api_server.go:253] Checking apiserver healthz at https://192.168.39.199:8443/healthz ...
	I0224 13:04:49.034158  927487 api_server.go:279] https://192.168.39.199:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0224 13:04:49.034196  927487 api_server.go:103] status: https://192.168.39.199:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 13:04:49.526857  927487 api_server.go:253] Checking apiserver healthz at https://192.168.39.199:8443/healthz ...
	I0224 13:04:49.536119  927487 api_server.go:279] https://192.168.39.199:8443/healthz returned 200:
	ok
	I0224 13:04:49.543371  927487 api_server.go:141] control plane version: v1.24.4
	I0224 13:04:49.543402  927487 api_server.go:131] duration metric: took 5.517059075s to wait for apiserver health ...
	I0224 13:04:49.543411  927487 cni.go:84] Creating CNI manager for ""
	I0224 13:04:49.543418  927487 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0224 13:04:49.545385  927487 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0224 13:04:49.546567  927487 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0224 13:04:49.559544  927487 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0224 13:04:49.581141  927487 system_pods.go:43] waiting for kube-system pods to appear ...
	I0224 13:04:49.585268  927487 system_pods.go:59] 7 kube-system pods found
	I0224 13:04:49.585321  927487 system_pods.go:61] "coredns-6d4b75cb6d-hg8v8" [ce810ee2-61f4-4f98-bbf2-63e5bc94187d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0224 13:04:49.585339  927487 system_pods.go:61] "etcd-test-preload-993368" [8e169805-788e-4490-9780-fd080287bf4b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0224 13:04:49.585354  927487 system_pods.go:61] "kube-apiserver-test-preload-993368" [1698ab6d-5850-439f-aa2c-2c6cbab6248a] Running
	I0224 13:04:49.585360  927487 system_pods.go:61] "kube-controller-manager-test-preload-993368" [e1da10eb-36b5-4624-a2d8-1f821742466a] Running
	I0224 13:04:49.585366  927487 system_pods.go:61] "kube-proxy-jnpzp" [f4beeb46-3ac3-4062-a8a3-9d97177b03a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0224 13:04:49.585370  927487 system_pods.go:61] "kube-scheduler-test-preload-993368" [a206fdf3-3146-4826-975d-b89c5a10f7a9] Running
	I0224 13:04:49.585382  927487 system_pods.go:61] "storage-provisioner" [4966d82c-766b-4643-8328-43003ef09cb1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0224 13:04:49.585391  927487 system_pods.go:74] duration metric: took 4.21896ms to wait for pod list to return data ...
	I0224 13:04:49.585400  927487 node_conditions.go:102] verifying NodePressure condition ...
	I0224 13:04:49.588247  927487 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0224 13:04:49.588283  927487 node_conditions.go:123] node cpu capacity is 2
	I0224 13:04:49.588297  927487 node_conditions.go:105] duration metric: took 2.892929ms to run NodePressure ...
	I0224 13:04:49.588322  927487 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 13:04:49.905620  927487 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0224 13:04:49.911069  927487 kubeadm.go:739] kubelet initialised
	I0224 13:04:49.911098  927487 kubeadm.go:740] duration metric: took 5.44293ms waiting for restarted kubelet to initialise ...
	I0224 13:04:49.911108  927487 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 13:04:49.916614  927487 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-hg8v8" in "kube-system" namespace to be "Ready" ...
	I0224 13:04:49.924605  927487 pod_ready.go:98] node "test-preload-993368" hosting pod "coredns-6d4b75cb6d-hg8v8" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-993368" has status "Ready":"False"
	I0224 13:04:49.924631  927487 pod_ready.go:82] duration metric: took 7.98826ms for pod "coredns-6d4b75cb6d-hg8v8" in "kube-system" namespace to be "Ready" ...
	E0224 13:04:49.924642  927487 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-993368" hosting pod "coredns-6d4b75cb6d-hg8v8" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-993368" has status "Ready":"False"
	I0224 13:04:49.924648  927487 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-993368" in "kube-system" namespace to be "Ready" ...
	I0224 13:04:49.931577  927487 pod_ready.go:98] node "test-preload-993368" hosting pod "etcd-test-preload-993368" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-993368" has status "Ready":"False"
	I0224 13:04:49.931614  927487 pod_ready.go:82] duration metric: took 6.95255ms for pod "etcd-test-preload-993368" in "kube-system" namespace to be "Ready" ...
	E0224 13:04:49.931628  927487 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-993368" hosting pod "etcd-test-preload-993368" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-993368" has status "Ready":"False"
	I0224 13:04:49.931638  927487 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-993368" in "kube-system" namespace to be "Ready" ...
	I0224 13:04:49.937168  927487 pod_ready.go:98] node "test-preload-993368" hosting pod "kube-apiserver-test-preload-993368" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-993368" has status "Ready":"False"
	I0224 13:04:49.937203  927487 pod_ready.go:82] duration metric: took 5.546815ms for pod "kube-apiserver-test-preload-993368" in "kube-system" namespace to be "Ready" ...
	E0224 13:04:49.937215  927487 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-993368" hosting pod "kube-apiserver-test-preload-993368" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-993368" has status "Ready":"False"
	I0224 13:04:49.937224  927487 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-993368" in "kube-system" namespace to be "Ready" ...
	I0224 13:04:49.985988  927487 pod_ready.go:98] node "test-preload-993368" hosting pod "kube-controller-manager-test-preload-993368" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-993368" has status "Ready":"False"
	I0224 13:04:49.986029  927487 pod_ready.go:82] duration metric: took 48.790571ms for pod "kube-controller-manager-test-preload-993368" in "kube-system" namespace to be "Ready" ...
	E0224 13:04:49.986045  927487 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-993368" hosting pod "kube-controller-manager-test-preload-993368" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-993368" has status "Ready":"False"
	I0224 13:04:49.986054  927487 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jnpzp" in "kube-system" namespace to be "Ready" ...
	I0224 13:04:50.389816  927487 pod_ready.go:98] node "test-preload-993368" hosting pod "kube-proxy-jnpzp" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-993368" has status "Ready":"False"
	I0224 13:04:50.389854  927487 pod_ready.go:82] duration metric: took 403.786441ms for pod "kube-proxy-jnpzp" in "kube-system" namespace to be "Ready" ...
	E0224 13:04:50.389868  927487 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-993368" hosting pod "kube-proxy-jnpzp" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-993368" has status "Ready":"False"
	I0224 13:04:50.389878  927487 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-993368" in "kube-system" namespace to be "Ready" ...
	I0224 13:04:50.785467  927487 pod_ready.go:98] node "test-preload-993368" hosting pod "kube-scheduler-test-preload-993368" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-993368" has status "Ready":"False"
	I0224 13:04:50.785499  927487 pod_ready.go:82] duration metric: took 395.61354ms for pod "kube-scheduler-test-preload-993368" in "kube-system" namespace to be "Ready" ...
	E0224 13:04:50.785509  927487 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-993368" hosting pod "kube-scheduler-test-preload-993368" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-993368" has status "Ready":"False"
	I0224 13:04:50.785518  927487 pod_ready.go:39] duration metric: took 874.398725ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 13:04:50.785549  927487 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0224 13:04:50.798834  927487 ops.go:34] apiserver oom_adj: -16
	I0224 13:04:50.798860  927487 kubeadm.go:597] duration metric: took 9.295553195s to restartPrimaryControlPlane
	I0224 13:04:50.798869  927487 kubeadm.go:394] duration metric: took 9.348333695s to StartCluster
	I0224 13:04:50.798888  927487 settings.go:142] acquiring lock: {Name:mk663e441d32b04abcccdab86db3e15276e74de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:04:50.798956  927487 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20451-887294/kubeconfig
	I0224 13:04:50.799668  927487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/kubeconfig: {Name:mk0122b69f41cd40d5267f436266ccce22ce5ef2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:04:50.799898  927487 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.199 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0224 13:04:50.799996  927487 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0224 13:04:50.800114  927487 addons.go:69] Setting storage-provisioner=true in profile "test-preload-993368"
	I0224 13:04:50.800138  927487 addons.go:238] Setting addon storage-provisioner=true in "test-preload-993368"
	I0224 13:04:50.800144  927487 config.go:182] Loaded profile config "test-preload-993368": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0224 13:04:50.800154  927487 addons.go:69] Setting default-storageclass=true in profile "test-preload-993368"
	W0224 13:04:50.800150  927487 addons.go:247] addon storage-provisioner should already be in state true
	I0224 13:04:50.800176  927487 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-993368"
	I0224 13:04:50.800238  927487 host.go:66] Checking if "test-preload-993368" exists ...
	I0224 13:04:50.800557  927487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:04:50.800604  927487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:04:50.800652  927487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:04:50.800703  927487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:04:50.801794  927487 out.go:177] * Verifying Kubernetes components...
	I0224 13:04:50.803213  927487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 13:04:50.816312  927487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45167
	I0224 13:04:50.816856  927487 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:04:50.817512  927487 main.go:141] libmachine: Using API Version  1
	I0224 13:04:50.817544  927487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:04:50.817955  927487 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:04:50.818242  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetState
	I0224 13:04:50.818761  927487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45053
	I0224 13:04:50.819195  927487 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:04:50.819752  927487 main.go:141] libmachine: Using API Version  1
	I0224 13:04:50.819774  927487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:04:50.820102  927487 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:04:50.820701  927487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:04:50.820754  927487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:04:50.821109  927487 kapi.go:59] client config for test-preload-993368: &rest.Config{Host:"https://192.168.39.199:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20451-887294/.minikube/profiles/test-preload-993368/client.crt", KeyFile:"/home/jenkins/minikube-integration/20451-887294/.minikube/profiles/test-preload-993368/client.key", CAFile:"/home/jenkins/minikube-integration/20451-887294/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24da640), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 13:04:50.821510  927487 addons.go:238] Setting addon default-storageclass=true in "test-preload-993368"
	W0224 13:04:50.821536  927487 addons.go:247] addon default-storageclass should already be in state true
	I0224 13:04:50.821564  927487 host.go:66] Checking if "test-preload-993368" exists ...
	I0224 13:04:50.821920  927487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:04:50.821971  927487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:04:50.836241  927487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42623
	I0224 13:04:50.836246  927487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42469
	I0224 13:04:50.836775  927487 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:04:50.836812  927487 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:04:50.837337  927487 main.go:141] libmachine: Using API Version  1
	I0224 13:04:50.837346  927487 main.go:141] libmachine: Using API Version  1
	I0224 13:04:50.837356  927487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:04:50.837361  927487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:04:50.837677  927487 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:04:50.837850  927487 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:04:50.837896  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetState
	I0224 13:04:50.838431  927487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:04:50.838480  927487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:04:50.839397  927487 main.go:141] libmachine: (test-preload-993368) Calling .DriverName
	I0224 13:04:50.841431  927487 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 13:04:50.842783  927487 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0224 13:04:50.842805  927487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0224 13:04:50.842826  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHHostname
	I0224 13:04:50.845338  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:50.845716  927487 main.go:141] libmachine: (test-preload-993368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:10:7a", ip: ""} in network mk-test-preload-993368: {Iface:virbr1 ExpiryTime:2025-02-24 14:00:24 +0000 UTC Type:0 Mac:52:54:00:95:10:7a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:test-preload-993368 Clientid:01:52:54:00:95:10:7a}
	I0224 13:04:50.845745  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined IP address 192.168.39.199 and MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:50.845891  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHPort
	I0224 13:04:50.846087  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHKeyPath
	I0224 13:04:50.846213  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHUsername
	I0224 13:04:50.846326  927487 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/test-preload-993368/id_rsa Username:docker}
	I0224 13:04:50.872773  927487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44025
	I0224 13:04:50.873284  927487 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:04:50.873879  927487 main.go:141] libmachine: Using API Version  1
	I0224 13:04:50.873902  927487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:04:50.874229  927487 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:04:50.874414  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetState
	I0224 13:04:50.876277  927487 main.go:141] libmachine: (test-preload-993368) Calling .DriverName
	I0224 13:04:50.876527  927487 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0224 13:04:50.876549  927487 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0224 13:04:50.876572  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHHostname
	I0224 13:04:50.879225  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:50.879636  927487 main.go:141] libmachine: (test-preload-993368) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:10:7a", ip: ""} in network mk-test-preload-993368: {Iface:virbr1 ExpiryTime:2025-02-24 14:00:24 +0000 UTC Type:0 Mac:52:54:00:95:10:7a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:test-preload-993368 Clientid:01:52:54:00:95:10:7a}
	I0224 13:04:50.879668  927487 main.go:141] libmachine: (test-preload-993368) DBG | domain test-preload-993368 has defined IP address 192.168.39.199 and MAC address 52:54:00:95:10:7a in network mk-test-preload-993368
	I0224 13:04:50.879940  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHPort
	I0224 13:04:50.880117  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHKeyPath
	I0224 13:04:50.880289  927487 main.go:141] libmachine: (test-preload-993368) Calling .GetSSHUsername
	I0224 13:04:50.880413  927487 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/test-preload-993368/id_rsa Username:docker}
	I0224 13:04:50.981370  927487 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0224 13:04:50.997658  927487 node_ready.go:35] waiting up to 6m0s for node "test-preload-993368" to be "Ready" ...
	I0224 13:04:51.099026  927487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0224 13:04:51.106283  927487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0224 13:04:52.086436  927487 main.go:141] libmachine: Making call to close driver server
	I0224 13:04:52.086467  927487 main.go:141] libmachine: (test-preload-993368) Calling .Close
	I0224 13:04:52.086615  927487 main.go:141] libmachine: Making call to close driver server
	I0224 13:04:52.086637  927487 main.go:141] libmachine: (test-preload-993368) Calling .Close
	I0224 13:04:52.086808  927487 main.go:141] libmachine: (test-preload-993368) DBG | Closing plugin on server side
	I0224 13:04:52.086814  927487 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:04:52.086834  927487 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:04:52.086853  927487 main.go:141] libmachine: Making call to close driver server
	I0224 13:04:52.086867  927487 main.go:141] libmachine: (test-preload-993368) Calling .Close
	I0224 13:04:52.086885  927487 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:04:52.086906  927487 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:04:52.086916  927487 main.go:141] libmachine: Making call to close driver server
	I0224 13:04:52.086927  927487 main.go:141] libmachine: (test-preload-993368) Calling .Close
	I0224 13:04:52.087119  927487 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:04:52.087132  927487 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:04:52.087173  927487 main.go:141] libmachine: (test-preload-993368) DBG | Closing plugin on server side
	I0224 13:04:52.087203  927487 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:04:52.087209  927487 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:04:52.092601  927487 main.go:141] libmachine: Making call to close driver server
	I0224 13:04:52.092617  927487 main.go:141] libmachine: (test-preload-993368) Calling .Close
	I0224 13:04:52.092864  927487 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:04:52.092882  927487 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:04:52.092907  927487 main.go:141] libmachine: (test-preload-993368) DBG | Closing plugin on server side
	I0224 13:04:52.094847  927487 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0224 13:04:52.096166  927487 addons.go:514] duration metric: took 1.296183741s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0224 13:04:53.002120  927487 node_ready.go:53] node "test-preload-993368" has status "Ready":"False"
	I0224 13:04:55.002205  927487 node_ready.go:53] node "test-preload-993368" has status "Ready":"False"
	I0224 13:04:57.003115  927487 node_ready.go:53] node "test-preload-993368" has status "Ready":"False"
	I0224 13:04:59.002576  927487 node_ready.go:49] node "test-preload-993368" has status "Ready":"True"
	I0224 13:04:59.002614  927487 node_ready.go:38] duration metric: took 8.00491491s for node "test-preload-993368" to be "Ready" ...
	I0224 13:04:59.002629  927487 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 13:04:59.007139  927487 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-hg8v8" in "kube-system" namespace to be "Ready" ...
	I0224 13:04:59.011915  927487 pod_ready.go:93] pod "coredns-6d4b75cb6d-hg8v8" in "kube-system" namespace has status "Ready":"True"
	I0224 13:04:59.011955  927487 pod_ready.go:82] duration metric: took 4.782373ms for pod "coredns-6d4b75cb6d-hg8v8" in "kube-system" namespace to be "Ready" ...
	I0224 13:04:59.011970  927487 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-993368" in "kube-system" namespace to be "Ready" ...
	I0224 13:05:01.017201  927487 pod_ready.go:103] pod "etcd-test-preload-993368" in "kube-system" namespace has status "Ready":"False"
	I0224 13:05:01.518987  927487 pod_ready.go:93] pod "etcd-test-preload-993368" in "kube-system" namespace has status "Ready":"True"
	I0224 13:05:01.519020  927487 pod_ready.go:82] duration metric: took 2.507041116s for pod "etcd-test-preload-993368" in "kube-system" namespace to be "Ready" ...
	I0224 13:05:01.519034  927487 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-993368" in "kube-system" namespace to be "Ready" ...
	I0224 13:05:01.525823  927487 pod_ready.go:93] pod "kube-apiserver-test-preload-993368" in "kube-system" namespace has status "Ready":"True"
	I0224 13:05:01.525849  927487 pod_ready.go:82] duration metric: took 6.806686ms for pod "kube-apiserver-test-preload-993368" in "kube-system" namespace to be "Ready" ...
	I0224 13:05:01.525866  927487 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-993368" in "kube-system" namespace to be "Ready" ...
	I0224 13:05:03.036477  927487 pod_ready.go:93] pod "kube-controller-manager-test-preload-993368" in "kube-system" namespace has status "Ready":"True"
	I0224 13:05:03.036507  927487 pod_ready.go:82] duration metric: took 1.510632824s for pod "kube-controller-manager-test-preload-993368" in "kube-system" namespace to be "Ready" ...
	I0224 13:05:03.036520  927487 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jnpzp" in "kube-system" namespace to be "Ready" ...
	I0224 13:05:03.041774  927487 pod_ready.go:93] pod "kube-proxy-jnpzp" in "kube-system" namespace has status "Ready":"True"
	I0224 13:05:03.041812  927487 pod_ready.go:82] duration metric: took 5.28329ms for pod "kube-proxy-jnpzp" in "kube-system" namespace to be "Ready" ...
	I0224 13:05:03.041827  927487 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-993368" in "kube-system" namespace to be "Ready" ...
	I0224 13:05:03.046490  927487 pod_ready.go:93] pod "kube-scheduler-test-preload-993368" in "kube-system" namespace has status "Ready":"True"
	I0224 13:05:03.046534  927487 pod_ready.go:82] duration metric: took 4.696849ms for pod "kube-scheduler-test-preload-993368" in "kube-system" namespace to be "Ready" ...
	I0224 13:05:03.046550  927487 pod_ready.go:39] duration metric: took 4.043903956s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 13:05:03.046575  927487 api_server.go:52] waiting for apiserver process to appear ...
	I0224 13:05:03.046646  927487 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:05:03.064948  927487 api_server.go:72] duration metric: took 12.265013546s to wait for apiserver process to appear ...
	I0224 13:05:03.064986  927487 api_server.go:88] waiting for apiserver healthz status ...
	I0224 13:05:03.065012  927487 api_server.go:253] Checking apiserver healthz at https://192.168.39.199:8443/healthz ...
	I0224 13:05:03.071809  927487 api_server.go:279] https://192.168.39.199:8443/healthz returned 200:
	ok
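	# The same health probe can be reproduced by hand (sketch, not part of this run's output;
	# minikube names the kubeconfig context after the profile):
	#   kubectl --context test-preload-993368 get --raw /healthz   # prints "ok" when the apiserver is healthy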
	I0224 13:05:03.073020  927487 api_server.go:141] control plane version: v1.24.4
	I0224 13:05:03.073049  927487 api_server.go:131] duration metric: took 8.054144ms to wait for apiserver health ...
	I0224 13:05:03.073061  927487 system_pods.go:43] waiting for kube-system pods to appear ...
	I0224 13:05:03.203544  927487 system_pods.go:59] 7 kube-system pods found
	I0224 13:05:03.203577  927487 system_pods.go:61] "coredns-6d4b75cb6d-hg8v8" [ce810ee2-61f4-4f98-bbf2-63e5bc94187d] Running
	I0224 13:05:03.203583  927487 system_pods.go:61] "etcd-test-preload-993368" [8e169805-788e-4490-9780-fd080287bf4b] Running
	I0224 13:05:03.203587  927487 system_pods.go:61] "kube-apiserver-test-preload-993368" [1698ab6d-5850-439f-aa2c-2c6cbab6248a] Running
	I0224 13:05:03.203592  927487 system_pods.go:61] "kube-controller-manager-test-preload-993368" [e1da10eb-36b5-4624-a2d8-1f821742466a] Running
	I0224 13:05:03.203596  927487 system_pods.go:61] "kube-proxy-jnpzp" [f4beeb46-3ac3-4062-a8a3-9d97177b03a8] Running
	I0224 13:05:03.203599  927487 system_pods.go:61] "kube-scheduler-test-preload-993368" [a206fdf3-3146-4826-975d-b89c5a10f7a9] Running
	I0224 13:05:03.203602  927487 system_pods.go:61] "storage-provisioner" [4966d82c-766b-4643-8328-43003ef09cb1] Running
	I0224 13:05:03.203609  927487 system_pods.go:74] duration metric: took 130.540086ms to wait for pod list to return data ...
	I0224 13:05:03.203617  927487 default_sa.go:34] waiting for default service account to be created ...
	I0224 13:05:03.402225  927487 default_sa.go:45] found service account: "default"
	I0224 13:05:03.402253  927487 default_sa.go:55] duration metric: took 198.630269ms for default service account to be created ...
	I0224 13:05:03.402263  927487 system_pods.go:116] waiting for k8s-apps to be running ...
	I0224 13:05:03.603062  927487 system_pods.go:86] 7 kube-system pods found
	I0224 13:05:03.603096  927487 system_pods.go:89] "coredns-6d4b75cb6d-hg8v8" [ce810ee2-61f4-4f98-bbf2-63e5bc94187d] Running
	I0224 13:05:03.603102  927487 system_pods.go:89] "etcd-test-preload-993368" [8e169805-788e-4490-9780-fd080287bf4b] Running
	I0224 13:05:03.603106  927487 system_pods.go:89] "kube-apiserver-test-preload-993368" [1698ab6d-5850-439f-aa2c-2c6cbab6248a] Running
	I0224 13:05:03.603110  927487 system_pods.go:89] "kube-controller-manager-test-preload-993368" [e1da10eb-36b5-4624-a2d8-1f821742466a] Running
	I0224 13:05:03.603113  927487 system_pods.go:89] "kube-proxy-jnpzp" [f4beeb46-3ac3-4062-a8a3-9d97177b03a8] Running
	I0224 13:05:03.603118  927487 system_pods.go:89] "kube-scheduler-test-preload-993368" [a206fdf3-3146-4826-975d-b89c5a10f7a9] Running
	I0224 13:05:03.603123  927487 system_pods.go:89] "storage-provisioner" [4966d82c-766b-4643-8328-43003ef09cb1] Running
	I0224 13:05:03.603139  927487 system_pods.go:126] duration metric: took 200.861343ms to wait for k8s-apps to be running ...
	I0224 13:05:03.603160  927487 system_svc.go:44] waiting for kubelet service to be running ....
	I0224 13:05:03.603215  927487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 13:05:03.618674  927487 system_svc.go:56] duration metric: took 15.507526ms WaitForService to wait for kubelet
	I0224 13:05:03.618711  927487 kubeadm.go:582] duration metric: took 12.818786394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0224 13:05:03.618732  927487 node_conditions.go:102] verifying NodePressure condition ...
	I0224 13:05:03.802285  927487 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0224 13:05:03.802315  927487 node_conditions.go:123] node cpu capacity is 2
	I0224 13:05:03.802327  927487 node_conditions.go:105] duration metric: took 183.590625ms to run NodePressure ...
	I0224 13:05:03.802341  927487 start.go:241] waiting for startup goroutines ...
	I0224 13:05:03.802348  927487 start.go:246] waiting for cluster config update ...
	I0224 13:05:03.802358  927487 start.go:255] writing updated cluster config ...
	I0224 13:05:03.802649  927487 ssh_runner.go:195] Run: rm -f paused
	I0224 13:05:03.855738  927487 start.go:600] kubectl: 1.32.2, cluster: 1.24.4 (minor skew: 8)
	I0224 13:05:03.857839  927487 out.go:201] 
	W0224 13:05:03.859302  927487 out.go:270] ! /usr/local/bin/kubectl is version 1.32.2, which may have incompatibilities with Kubernetes 1.24.4.
	I0224 13:05:03.860685  927487 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0224 13:05:03.862053  927487 out.go:177] * Done! kubectl is now configured to use "test-preload-993368" cluster and "default" namespace by default
	
	
	==> CRI-O <==
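	# The CRI-O entries below are debug logs for CRI RPCs (Version, ImageFsInfo, ListContainers) made against the runtime.
	# Roughly the same data can be queried by hand from inside the node (a sketch, assuming crictl is on the node's PATH,
	# as it is in minikube's crio images):
	#   minikube -p test-preload-993368 ssh -- sudo crictl version       # RuntimeService/Version
	#   minikube -p test-preload-993368 ssh -- sudo crictl imagefsinfo   # ImageService/ImageFsInfo
	#   minikube -p test-preload-993368 ssh -- sudo crictl ps            # RuntimeService/ListContainers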
	Feb 24 13:05:04 test-preload-993368 crio[670]: time="2025-02-24 13:05:04.804067029Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740402304804045510,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2bfc29f-b144-4862-b280-2fa7bf252408 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:05:04 test-preload-993368 crio[670]: time="2025-02-24 13:05:04.804669393Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b7baff3b-f69b-4d9e-9f46-998efe0a71cc name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:05:04 test-preload-993368 crio[670]: time="2025-02-24 13:05:04.804721422Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b7baff3b-f69b-4d9e-9f46-998efe0a71cc name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:05:04 test-preload-993368 crio[670]: time="2025-02-24 13:05:04.804883248Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f98a313df4b0155c42bfcfc4560c6d49ca7dce36f76dbca43cff2b00de7d662,PodSandboxId:128866625b44acba61ca1b88ffcfd1b123a6e3d24112697408d6523b2cf7bff7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1740402297245422353,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-hg8v8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce810ee2-61f4-4f98-bbf2-63e5bc94187d,},Annotations:map[string]string{io.kubernetes.container.hash: 50a7a390,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6e7b6176a50e87c9b6d5d32d793be62638484ebcc3093e045c24ba898b4d3a4,PodSandboxId:2b8a126e6b4f7c67d76e9158b825b0687f01ce5856b7a21d04552f38cce04bae,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1740402290274729414,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 4966d82c-766b-4643-8328-43003ef09cb1,},Annotations:map[string]string{io.kubernetes.container.hash: 1d03e370,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ada187e00b4a666e4323a88e8f6dfc1df141b59b2f89fc383da5d66a3fffdac,PodSandboxId:ce11331f6b39d5bd7f3b41e39468a51d8d429aa58ad753286b66dce2782fbcaf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1740402289752876918,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jnpzp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4
beeb46-3ac3-4062-a8a3-9d97177b03a8,},Annotations:map[string]string{io.kubernetes.container.hash: d640a0af,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74d3b5f060cf9de6048efbed38efca85a5af53aa086496e552ec35a1a6fe02e8,PodSandboxId:257a7f15ca117a9b5136a5a290710530c52d4bcabc6bd96f34a42b03991cf3a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1740402283715698494,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-993368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36c374b9ef72b8360fccf87e5835c304,},Anno
tations:map[string]string{io.kubernetes.container.hash: a6470d0d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1412cb3e8167d9e849bcb9f7d81b3c0e619cc4a963d8405b25ccc85e7bde4c8e,PodSandboxId:b0c8227c9cc2478d12f5111491c9b7510d33baa1b549d936bb28da30785dc9f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1740402283759349339,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-993368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4605010d54ad32574b3c7ecac5bf2192,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e7235e8313c66f581e1f614ac74b150d3222b6c3ca35b07afe711887faa3b8,PodSandboxId:4857d0d9d107d6b1b4b4e8cdfc1134204ac93586c2849240af87cb0d04b051a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1740402283688876003,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-993368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148acd0546e45981d779f002e95ac45a,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 85887793,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49da579e64dd782d062b1f67641597c573e5fb0f252a6f1151ff68ad2251763e,PodSandboxId:01f0adae4d6844d69b5e7004742d5be32100793b123f3f56a96a22abec8e1a87,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1740402283670014433,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-993368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 608e10b8bddf518fc47e61093f4f0d4f,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b7baff3b-f69b-4d9e-9f46-998efe0a71cc name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:05:04 test-preload-993368 crio[670]: time="2025-02-24 13:05:04.843280381Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a151d51e-c59b-458c-89ab-300678dfbf83 name=/runtime.v1.RuntimeService/Version
	Feb 24 13:05:04 test-preload-993368 crio[670]: time="2025-02-24 13:05:04.843353287Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a151d51e-c59b-458c-89ab-300678dfbf83 name=/runtime.v1.RuntimeService/Version
	Feb 24 13:05:04 test-preload-993368 crio[670]: time="2025-02-24 13:05:04.844579315Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5069f07d-b57e-4f18-a44b-a9607f23ddd1 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:05:04 test-preload-993368 crio[670]: time="2025-02-24 13:05:04.845296873Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740402304845273131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5069f07d-b57e-4f18-a44b-a9607f23ddd1 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:05:04 test-preload-993368 crio[670]: time="2025-02-24 13:05:04.845789695Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ef83c68a-085d-46e5-9c7b-5a2c02d62565 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:05:04 test-preload-993368 crio[670]: time="2025-02-24 13:05:04.846058256Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ef83c68a-085d-46e5-9c7b-5a2c02d62565 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:05:04 test-preload-993368 crio[670]: time="2025-02-24 13:05:04.846289642Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f98a313df4b0155c42bfcfc4560c6d49ca7dce36f76dbca43cff2b00de7d662,PodSandboxId:128866625b44acba61ca1b88ffcfd1b123a6e3d24112697408d6523b2cf7bff7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1740402297245422353,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-hg8v8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce810ee2-61f4-4f98-bbf2-63e5bc94187d,},Annotations:map[string]string{io.kubernetes.container.hash: 50a7a390,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6e7b6176a50e87c9b6d5d32d793be62638484ebcc3093e045c24ba898b4d3a4,PodSandboxId:2b8a126e6b4f7c67d76e9158b825b0687f01ce5856b7a21d04552f38cce04bae,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1740402290274729414,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 4966d82c-766b-4643-8328-43003ef09cb1,},Annotations:map[string]string{io.kubernetes.container.hash: 1d03e370,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ada187e00b4a666e4323a88e8f6dfc1df141b59b2f89fc383da5d66a3fffdac,PodSandboxId:ce11331f6b39d5bd7f3b41e39468a51d8d429aa58ad753286b66dce2782fbcaf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1740402289752876918,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jnpzp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4
beeb46-3ac3-4062-a8a3-9d97177b03a8,},Annotations:map[string]string{io.kubernetes.container.hash: d640a0af,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74d3b5f060cf9de6048efbed38efca85a5af53aa086496e552ec35a1a6fe02e8,PodSandboxId:257a7f15ca117a9b5136a5a290710530c52d4bcabc6bd96f34a42b03991cf3a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1740402283715698494,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-993368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36c374b9ef72b8360fccf87e5835c304,},Anno
tations:map[string]string{io.kubernetes.container.hash: a6470d0d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1412cb3e8167d9e849bcb9f7d81b3c0e619cc4a963d8405b25ccc85e7bde4c8e,PodSandboxId:b0c8227c9cc2478d12f5111491c9b7510d33baa1b549d936bb28da30785dc9f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1740402283759349339,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-993368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4605010d54ad32574b3c7ecac5bf2192,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e7235e8313c66f581e1f614ac74b150d3222b6c3ca35b07afe711887faa3b8,PodSandboxId:4857d0d9d107d6b1b4b4e8cdfc1134204ac93586c2849240af87cb0d04b051a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1740402283688876003,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-993368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148acd0546e45981d779f002e95ac45a,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 85887793,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49da579e64dd782d062b1f67641597c573e5fb0f252a6f1151ff68ad2251763e,PodSandboxId:01f0adae4d6844d69b5e7004742d5be32100793b123f3f56a96a22abec8e1a87,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1740402283670014433,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-993368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 608e10b8bddf518fc47e61093f4f0d4f,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ef83c68a-085d-46e5-9c7b-5a2c02d62565 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:05:04 test-preload-993368 crio[670]: time="2025-02-24 13:05:04.886929988Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f91fbfdb-4cb3-453b-8a42-cacbe1675c9c name=/runtime.v1.RuntimeService/Version
	Feb 24 13:05:04 test-preload-993368 crio[670]: time="2025-02-24 13:05:04.887005584Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f91fbfdb-4cb3-453b-8a42-cacbe1675c9c name=/runtime.v1.RuntimeService/Version
	Feb 24 13:05:04 test-preload-993368 crio[670]: time="2025-02-24 13:05:04.888431027Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=267b8df6-3618-47e5-b8f5-80e7fb4ba2f5 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:05:04 test-preload-993368 crio[670]: time="2025-02-24 13:05:04.888876390Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740402304888855514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=267b8df6-3618-47e5-b8f5-80e7fb4ba2f5 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:05:04 test-preload-993368 crio[670]: time="2025-02-24 13:05:04.889488338Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a0061d10-992d-41f2-8410-7580336bc2a2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:05:04 test-preload-993368 crio[670]: time="2025-02-24 13:05:04.889568792Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a0061d10-992d-41f2-8410-7580336bc2a2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:05:04 test-preload-993368 crio[670]: time="2025-02-24 13:05:04.889745502Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f98a313df4b0155c42bfcfc4560c6d49ca7dce36f76dbca43cff2b00de7d662,PodSandboxId:128866625b44acba61ca1b88ffcfd1b123a6e3d24112697408d6523b2cf7bff7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1740402297245422353,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-hg8v8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce810ee2-61f4-4f98-bbf2-63e5bc94187d,},Annotations:map[string]string{io.kubernetes.container.hash: 50a7a390,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6e7b6176a50e87c9b6d5d32d793be62638484ebcc3093e045c24ba898b4d3a4,PodSandboxId:2b8a126e6b4f7c67d76e9158b825b0687f01ce5856b7a21d04552f38cce04bae,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1740402290274729414,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 4966d82c-766b-4643-8328-43003ef09cb1,},Annotations:map[string]string{io.kubernetes.container.hash: 1d03e370,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ada187e00b4a666e4323a88e8f6dfc1df141b59b2f89fc383da5d66a3fffdac,PodSandboxId:ce11331f6b39d5bd7f3b41e39468a51d8d429aa58ad753286b66dce2782fbcaf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1740402289752876918,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jnpzp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4
beeb46-3ac3-4062-a8a3-9d97177b03a8,},Annotations:map[string]string{io.kubernetes.container.hash: d640a0af,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74d3b5f060cf9de6048efbed38efca85a5af53aa086496e552ec35a1a6fe02e8,PodSandboxId:257a7f15ca117a9b5136a5a290710530c52d4bcabc6bd96f34a42b03991cf3a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1740402283715698494,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-993368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36c374b9ef72b8360fccf87e5835c304,},Anno
tations:map[string]string{io.kubernetes.container.hash: a6470d0d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1412cb3e8167d9e849bcb9f7d81b3c0e619cc4a963d8405b25ccc85e7bde4c8e,PodSandboxId:b0c8227c9cc2478d12f5111491c9b7510d33baa1b549d936bb28da30785dc9f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1740402283759349339,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-993368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4605010d54ad32574b3c7ecac5bf2192,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e7235e8313c66f581e1f614ac74b150d3222b6c3ca35b07afe711887faa3b8,PodSandboxId:4857d0d9d107d6b1b4b4e8cdfc1134204ac93586c2849240af87cb0d04b051a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1740402283688876003,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-993368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148acd0546e45981d779f002e95ac45a,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 85887793,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49da579e64dd782d062b1f67641597c573e5fb0f252a6f1151ff68ad2251763e,PodSandboxId:01f0adae4d6844d69b5e7004742d5be32100793b123f3f56a96a22abec8e1a87,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1740402283670014433,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-993368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 608e10b8bddf518fc47e61093f4f0d4f,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a0061d10-992d-41f2-8410-7580336bc2a2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:05:04 test-preload-993368 crio[670]: time="2025-02-24 13:05:04.925670971Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b1a297a-7a11-4107-b85a-6761be922df5 name=/runtime.v1.RuntimeService/Version
	Feb 24 13:05:04 test-preload-993368 crio[670]: time="2025-02-24 13:05:04.925758985Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b1a297a-7a11-4107-b85a-6761be922df5 name=/runtime.v1.RuntimeService/Version
	Feb 24 13:05:04 test-preload-993368 crio[670]: time="2025-02-24 13:05:04.926873394Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f0178f4f-e710-4283-b8a8-744458af940a name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:05:04 test-preload-993368 crio[670]: time="2025-02-24 13:05:04.927371297Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740402304927347669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f0178f4f-e710-4283-b8a8-744458af940a name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:05:04 test-preload-993368 crio[670]: time="2025-02-24 13:05:04.928088292Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c4e46193-f3db-44dc-a6ed-ec090480e837 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:05:04 test-preload-993368 crio[670]: time="2025-02-24 13:05:04.928280247Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c4e46193-f3db-44dc-a6ed-ec090480e837 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:05:04 test-preload-993368 crio[670]: time="2025-02-24 13:05:04.928457776Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f98a313df4b0155c42bfcfc4560c6d49ca7dce36f76dbca43cff2b00de7d662,PodSandboxId:128866625b44acba61ca1b88ffcfd1b123a6e3d24112697408d6523b2cf7bff7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1740402297245422353,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-hg8v8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce810ee2-61f4-4f98-bbf2-63e5bc94187d,},Annotations:map[string]string{io.kubernetes.container.hash: 50a7a390,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6e7b6176a50e87c9b6d5d32d793be62638484ebcc3093e045c24ba898b4d3a4,PodSandboxId:2b8a126e6b4f7c67d76e9158b825b0687f01ce5856b7a21d04552f38cce04bae,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1740402290274729414,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 4966d82c-766b-4643-8328-43003ef09cb1,},Annotations:map[string]string{io.kubernetes.container.hash: 1d03e370,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ada187e00b4a666e4323a88e8f6dfc1df141b59b2f89fc383da5d66a3fffdac,PodSandboxId:ce11331f6b39d5bd7f3b41e39468a51d8d429aa58ad753286b66dce2782fbcaf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1740402289752876918,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jnpzp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4
beeb46-3ac3-4062-a8a3-9d97177b03a8,},Annotations:map[string]string{io.kubernetes.container.hash: d640a0af,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74d3b5f060cf9de6048efbed38efca85a5af53aa086496e552ec35a1a6fe02e8,PodSandboxId:257a7f15ca117a9b5136a5a290710530c52d4bcabc6bd96f34a42b03991cf3a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1740402283715698494,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-993368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36c374b9ef72b8360fccf87e5835c304,},Anno
tations:map[string]string{io.kubernetes.container.hash: a6470d0d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1412cb3e8167d9e849bcb9f7d81b3c0e619cc4a963d8405b25ccc85e7bde4c8e,PodSandboxId:b0c8227c9cc2478d12f5111491c9b7510d33baa1b549d936bb28da30785dc9f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1740402283759349339,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-993368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4605010d54ad32574b3c7ecac5bf2192,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e7235e8313c66f581e1f614ac74b150d3222b6c3ca35b07afe711887faa3b8,PodSandboxId:4857d0d9d107d6b1b4b4e8cdfc1134204ac93586c2849240af87cb0d04b051a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1740402283688876003,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-993368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148acd0546e45981d779f002e95ac45a,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 85887793,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49da579e64dd782d062b1f67641597c573e5fb0f252a6f1151ff68ad2251763e,PodSandboxId:01f0adae4d6844d69b5e7004742d5be32100793b123f3f56a96a22abec8e1a87,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1740402283670014433,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-993368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 608e10b8bddf518fc47e61093f4f0d4f,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c4e46193-f3db-44dc-a6ed-ec090480e837 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4f98a313df4b0       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   7 seconds ago       Running             coredns                   1                   128866625b44a       coredns-6d4b75cb6d-hg8v8
	e6e7b6176a50e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   2b8a126e6b4f7       storage-provisioner
	7ada187e00b4a       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   15 seconds ago      Running             kube-proxy                1                   ce11331f6b39d       kube-proxy-jnpzp
	1412cb3e8167d       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   b0c8227c9cc24       kube-scheduler-test-preload-993368
	74d3b5f060cf9       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   257a7f15ca117       etcd-test-preload-993368
	a9e7235e8313c       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   4857d0d9d107d       kube-apiserver-test-preload-993368
	49da579e64dd7       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   01f0adae4d684       kube-controller-manager-test-preload-993368
	
	
	==> coredns [4f98a313df4b0155c42bfcfc4560c6d49ca7dce36f76dbca43cff2b00de7d662] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:50214 - 60265 "HINFO IN 6489334042382865082.2541988562966258594. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010306955s
	
	
	==> describe nodes <==
	Name:               test-preload-993368
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-993368
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76650f53499dbb51707efa4a87e94b72d747650
	                    minikube.k8s.io/name=test-preload-993368
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_24T13_01_22_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Feb 2025 13:01:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-993368
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Feb 2025 13:04:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Feb 2025 13:04:58 +0000   Mon, 24 Feb 2025 13:01:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Feb 2025 13:04:58 +0000   Mon, 24 Feb 2025 13:01:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Feb 2025 13:04:58 +0000   Mon, 24 Feb 2025 13:01:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Feb 2025 13:04:58 +0000   Mon, 24 Feb 2025 13:04:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.199
	  Hostname:    test-preload-993368
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 70430e05a6df458d9e3332d0d08d5e27
	  System UUID:                70430e05-a6df-458d-9e33-32d0d08d5e27
	  Boot ID:                    a6ef5456-d276-4b95-b9c9-c861f7f91e28
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-hg8v8                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m31s
	  kube-system                 etcd-test-preload-993368                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m43s
	  kube-system                 kube-apiserver-test-preload-993368             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 kube-controller-manager-test-preload-993368    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 kube-proxy-jnpzp                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 kube-scheduler-test-preload-993368             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 15s                    kube-proxy       
	  Normal  Starting                 3m28s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m51s (x4 over 3m51s)  kubelet          Node test-preload-993368 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m51s (x4 over 3m51s)  kubelet          Node test-preload-993368 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m51s (x4 over 3m51s)  kubelet          Node test-preload-993368 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m43s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m43s                  kubelet          Node test-preload-993368 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m43s                  kubelet          Node test-preload-993368 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m43s                  kubelet          Node test-preload-993368 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m33s                  kubelet          Node test-preload-993368 status is now: NodeReady
	  Normal  RegisteredNode           3m32s                  node-controller  Node test-preload-993368 event: Registered Node test-preload-993368 in Controller
	  Normal  Starting                 23s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 23s)      kubelet          Node test-preload-993368 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 23s)      kubelet          Node test-preload-993368 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 23s)      kubelet          Node test-preload-993368 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                     node-controller  Node test-preload-993368 event: Registered Node test-preload-993368 in Controller
	
	
	==> dmesg <==
	[Feb24 13:04] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053448] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043142] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.014913] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.909188] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.634471] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.820272] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.062964] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069022] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.200995] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.125725] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.292902] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[ +13.163741] systemd-fstab-generator[990]: Ignoring "noauto" option for root device
	[  +0.057550] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.848751] systemd-fstab-generator[1117]: Ignoring "noauto" option for root device
	[  +5.672467] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.484480] systemd-fstab-generator[1767]: Ignoring "noauto" option for root device
	[  +6.132504] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [74d3b5f060cf9de6048efbed38efca85a5af53aa086496e552ec35a1a6fe02e8] <==
	{"level":"info","ts":"2025-02-24T13:04:44.221Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"adf16ee9d395f7b5","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-02-24T13:04:44.222Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-02-24T13:04:44.223Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adf16ee9d395f7b5 switched to configuration voters=(12533921188505057205)"}
	{"level":"info","ts":"2025-02-24T13:04:44.224Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"beb078c6af941210","local-member-id":"adf16ee9d395f7b5","added-peer-id":"adf16ee9d395f7b5","added-peer-peer-urls":["https://192.168.39.199:2380"]}
	{"level":"info","ts":"2025-02-24T13:04:44.224Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"beb078c6af941210","local-member-id":"adf16ee9d395f7b5","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-24T13:04:44.224Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-24T13:04:44.233Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.199:2380"}
	{"level":"info","ts":"2025-02-24T13:04:44.233Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.199:2380"}
	{"level":"info","ts":"2025-02-24T13:04:44.233Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-02-24T13:04:44.234Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-02-24T13:04:44.234Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"adf16ee9d395f7b5","initial-advertise-peer-urls":["https://192.168.39.199:2380"],"listen-peer-urls":["https://192.168.39.199:2380"],"advertise-client-urls":["https://192.168.39.199:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.199:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-02-24T13:04:45.769Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adf16ee9d395f7b5 is starting a new election at term 2"}
	{"level":"info","ts":"2025-02-24T13:04:45.769Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adf16ee9d395f7b5 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-02-24T13:04:45.769Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adf16ee9d395f7b5 received MsgPreVoteResp from adf16ee9d395f7b5 at term 2"}
	{"level":"info","ts":"2025-02-24T13:04:45.769Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adf16ee9d395f7b5 became candidate at term 3"}
	{"level":"info","ts":"2025-02-24T13:04:45.769Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adf16ee9d395f7b5 received MsgVoteResp from adf16ee9d395f7b5 at term 3"}
	{"level":"info","ts":"2025-02-24T13:04:45.769Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adf16ee9d395f7b5 became leader at term 3"}
	{"level":"info","ts":"2025-02-24T13:04:45.769Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: adf16ee9d395f7b5 elected leader adf16ee9d395f7b5 at term 3"}
	{"level":"info","ts":"2025-02-24T13:04:45.769Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"adf16ee9d395f7b5","local-member-attributes":"{Name:test-preload-993368 ClientURLs:[https://192.168.39.199:2379]}","request-path":"/0/members/adf16ee9d395f7b5/attributes","cluster-id":"beb078c6af941210","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-24T13:04:45.769Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-24T13:04:45.771Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-24T13:04:45.771Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-24T13:04:45.771Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-24T13:04:45.772Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.199:2379"}
	{"level":"info","ts":"2025-02-24T13:04:45.772Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 13:05:05 up 0 min,  0 users,  load average: 0.57, 0.17, 0.06
	Linux test-preload-993368 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a9e7235e8313c66f581e1f614ac74b150d3222b6c3ca35b07afe711887faa3b8] <==
	I0224 13:04:48.257101       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0224 13:04:48.257190       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0224 13:04:48.266674       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0224 13:04:48.266756       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0224 13:04:48.266863       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0224 13:04:48.284355       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0224 13:04:48.317339       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0224 13:04:48.318825       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0224 13:04:48.320344       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0224 13:04:48.326522       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0224 13:04:48.329195       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0224 13:04:48.337532       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0224 13:04:48.366896       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0224 13:04:48.414860       1 cache.go:39] Caches are synced for autoregister controller
	I0224 13:04:48.419622       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0224 13:04:48.902534       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0224 13:04:49.216679       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0224 13:04:49.743577       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0224 13:04:49.758368       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0224 13:04:49.860875       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0224 13:04:49.881947       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0224 13:04:49.888877       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0224 13:04:50.147525       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0224 13:05:00.705755       1 controller.go:611] quota admission added evaluator for: endpoints
	I0224 13:05:00.865998       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [49da579e64dd782d062b1f67641597c573e5fb0f252a6f1151ff68ad2251763e] <==
	I0224 13:05:00.816485       1 shared_informer.go:262] Caches are synced for taint
	I0224 13:05:00.816774       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0224 13:05:00.817463       1 event.go:294] "Event occurred" object="test-preload-993368" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-993368 event: Registered Node test-preload-993368 in Controller"
	I0224 13:05:00.817537       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0224 13:05:00.817774       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-993368. Assuming now as a timestamp.
	I0224 13:05:00.817949       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0224 13:05:00.819375       1 shared_informer.go:262] Caches are synced for node
	I0224 13:05:00.819491       1 range_allocator.go:173] Starting range CIDR allocator
	I0224 13:05:00.819496       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0224 13:05:00.819504       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0224 13:05:00.821542       1 shared_informer.go:262] Caches are synced for daemon sets
	I0224 13:05:00.850622       1 shared_informer.go:262] Caches are synced for expand
	I0224 13:05:00.856005       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0224 13:05:00.877566       1 shared_informer.go:262] Caches are synced for GC
	I0224 13:05:00.888902       1 shared_informer.go:262] Caches are synced for TTL
	I0224 13:05:00.920830       1 shared_informer.go:262] Caches are synced for persistent volume
	I0224 13:05:00.925396       1 shared_informer.go:262] Caches are synced for attach detach
	I0224 13:05:00.930208       1 shared_informer.go:262] Caches are synced for ephemeral
	I0224 13:05:00.931373       1 shared_informer.go:262] Caches are synced for stateful set
	I0224 13:05:00.932563       1 shared_informer.go:262] Caches are synced for PVC protection
	I0224 13:05:00.964584       1 shared_informer.go:262] Caches are synced for resource quota
	I0224 13:05:00.970203       1 shared_informer.go:262] Caches are synced for resource quota
	I0224 13:05:01.363737       1 shared_informer.go:262] Caches are synced for garbage collector
	I0224 13:05:01.363870       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0224 13:05:01.398380       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [7ada187e00b4a666e4323a88e8f6dfc1df141b59b2f89fc383da5d66a3fffdac] <==
	I0224 13:04:50.099272       1 node.go:163] Successfully retrieved node IP: 192.168.39.199
	I0224 13:04:50.099455       1 server_others.go:138] "Detected node IP" address="192.168.39.199"
	I0224 13:04:50.099532       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0224 13:04:50.138407       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0224 13:04:50.138423       1 server_others.go:206] "Using iptables Proxier"
	I0224 13:04:50.138986       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0224 13:04:50.140090       1 server.go:661] "Version info" version="v1.24.4"
	I0224 13:04:50.140281       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 13:04:50.142325       1 config.go:317] "Starting service config controller"
	I0224 13:04:50.142629       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0224 13:04:50.142656       1 config.go:444] "Starting node config controller"
	I0224 13:04:50.142660       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0224 13:04:50.146966       1 config.go:226] "Starting endpoint slice config controller"
	I0224 13:04:50.147055       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0224 13:04:50.243230       1 shared_informer.go:262] Caches are synced for node config
	I0224 13:04:50.243300       1 shared_informer.go:262] Caches are synced for service config
	I0224 13:04:50.253292       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1412cb3e8167d9e849bcb9f7d81b3c0e619cc4a963d8405b25ccc85e7bde4c8e] <==
	I0224 13:04:45.049968       1 serving.go:348] Generated self-signed cert in-memory
	W0224 13:04:48.308781       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0224 13:04:48.308970       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0224 13:04:48.308997       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0224 13:04:48.309015       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0224 13:04:48.336971       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0224 13:04:48.337008       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 13:04:48.348269       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0224 13:04:48.349011       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0224 13:04:48.349065       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0224 13:04:48.349097       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0224 13:04:48.450053       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 24 13:04:48 test-preload-993368 kubelet[1124]: I0224 13:04:48.388995    1124 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-993368"
	Feb 24 13:04:48 test-preload-993368 kubelet[1124]: I0224 13:04:48.391794    1124 setters.go:532] "Node became not ready" node="test-preload-993368" condition={Type:Ready Status:False LastHeartbeatTime:2025-02-24 13:04:48.391717896 +0000 UTC m=+5.597859046 LastTransitionTime:2025-02-24 13:04:48.391717896 +0000 UTC m=+5.597859046 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Feb 24 13:04:48 test-preload-993368 kubelet[1124]: I0224 13:04:48.960317    1124 apiserver.go:52] "Watching apiserver"
	Feb 24 13:04:48 test-preload-993368 kubelet[1124]: I0224 13:04:48.973365    1124 topology_manager.go:200] "Topology Admit Handler"
	Feb 24 13:04:48 test-preload-993368 kubelet[1124]: I0224 13:04:48.973493    1124 topology_manager.go:200] "Topology Admit Handler"
	Feb 24 13:04:48 test-preload-993368 kubelet[1124]: I0224 13:04:48.973532    1124 topology_manager.go:200] "Topology Admit Handler"
	Feb 24 13:04:48 test-preload-993368 kubelet[1124]: E0224 13:04:48.975277    1124 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-hg8v8" podUID=ce810ee2-61f4-4f98-bbf2-63e5bc94187d
	Feb 24 13:04:49 test-preload-993368 kubelet[1124]: I0224 13:04:49.031907    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4beeb46-3ac3-4062-a8a3-9d97177b03a8-xtables-lock\") pod \"kube-proxy-jnpzp\" (UID: \"f4beeb46-3ac3-4062-a8a3-9d97177b03a8\") " pod="kube-system/kube-proxy-jnpzp"
	Feb 24 13:04:49 test-preload-993368 kubelet[1124]: I0224 13:04:49.032732    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzxw6\" (UniqueName: \"kubernetes.io/projected/f4beeb46-3ac3-4062-a8a3-9d97177b03a8-kube-api-access-gzxw6\") pod \"kube-proxy-jnpzp\" (UID: \"f4beeb46-3ac3-4062-a8a3-9d97177b03a8\") " pod="kube-system/kube-proxy-jnpzp"
	Feb 24 13:04:49 test-preload-993368 kubelet[1124]: I0224 13:04:49.032932    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4966d82c-766b-4643-8328-43003ef09cb1-tmp\") pod \"storage-provisioner\" (UID: \"4966d82c-766b-4643-8328-43003ef09cb1\") " pod="kube-system/storage-provisioner"
	Feb 24 13:04:49 test-preload-993368 kubelet[1124]: I0224 13:04:49.033223    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgjnw\" (UniqueName: \"kubernetes.io/projected/ce810ee2-61f4-4f98-bbf2-63e5bc94187d-kube-api-access-wgjnw\") pod \"coredns-6d4b75cb6d-hg8v8\" (UID: \"ce810ee2-61f4-4f98-bbf2-63e5bc94187d\") " pod="kube-system/coredns-6d4b75cb6d-hg8v8"
	Feb 24 13:04:49 test-preload-993368 kubelet[1124]: I0224 13:04:49.033382    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wklp\" (UniqueName: \"kubernetes.io/projected/4966d82c-766b-4643-8328-43003ef09cb1-kube-api-access-9wklp\") pod \"storage-provisioner\" (UID: \"4966d82c-766b-4643-8328-43003ef09cb1\") " pod="kube-system/storage-provisioner"
	Feb 24 13:04:49 test-preload-993368 kubelet[1124]: I0224 13:04:49.033516    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f4beeb46-3ac3-4062-a8a3-9d97177b03a8-kube-proxy\") pod \"kube-proxy-jnpzp\" (UID: \"f4beeb46-3ac3-4062-a8a3-9d97177b03a8\") " pod="kube-system/kube-proxy-jnpzp"
	Feb 24 13:04:49 test-preload-993368 kubelet[1124]: I0224 13:04:49.033658    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4beeb46-3ac3-4062-a8a3-9d97177b03a8-lib-modules\") pod \"kube-proxy-jnpzp\" (UID: \"f4beeb46-3ac3-4062-a8a3-9d97177b03a8\") " pod="kube-system/kube-proxy-jnpzp"
	Feb 24 13:04:49 test-preload-993368 kubelet[1124]: I0224 13:04:49.033789    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce810ee2-61f4-4f98-bbf2-63e5bc94187d-config-volume\") pod \"coredns-6d4b75cb6d-hg8v8\" (UID: \"ce810ee2-61f4-4f98-bbf2-63e5bc94187d\") " pod="kube-system/coredns-6d4b75cb6d-hg8v8"
	Feb 24 13:04:49 test-preload-993368 kubelet[1124]: I0224 13:04:49.033916    1124 reconciler.go:159] "Reconciler: start to sync state"
	Feb 24 13:04:49 test-preload-993368 kubelet[1124]: E0224 13:04:49.136105    1124 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 24 13:04:49 test-preload-993368 kubelet[1124]: E0224 13:04:49.136291    1124 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/ce810ee2-61f4-4f98-bbf2-63e5bc94187d-config-volume podName:ce810ee2-61f4-4f98-bbf2-63e5bc94187d nodeName:}" failed. No retries permitted until 2025-02-24 13:04:49.63624422 +0000 UTC m=+6.842385386 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ce810ee2-61f4-4f98-bbf2-63e5bc94187d-config-volume") pod "coredns-6d4b75cb6d-hg8v8" (UID: "ce810ee2-61f4-4f98-bbf2-63e5bc94187d") : object "kube-system"/"coredns" not registered
	Feb 24 13:04:49 test-preload-993368 kubelet[1124]: E0224 13:04:49.640691    1124 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 24 13:04:49 test-preload-993368 kubelet[1124]: E0224 13:04:49.640777    1124 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/ce810ee2-61f4-4f98-bbf2-63e5bc94187d-config-volume podName:ce810ee2-61f4-4f98-bbf2-63e5bc94187d nodeName:}" failed. No retries permitted until 2025-02-24 13:04:50.640762507 +0000 UTC m=+7.846903658 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ce810ee2-61f4-4f98-bbf2-63e5bc94187d-config-volume") pod "coredns-6d4b75cb6d-hg8v8" (UID: "ce810ee2-61f4-4f98-bbf2-63e5bc94187d") : object "kube-system"/"coredns" not registered
	Feb 24 13:04:50 test-preload-993368 kubelet[1124]: E0224 13:04:50.651025    1124 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 24 13:04:50 test-preload-993368 kubelet[1124]: E0224 13:04:50.651171    1124 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/ce810ee2-61f4-4f98-bbf2-63e5bc94187d-config-volume podName:ce810ee2-61f4-4f98-bbf2-63e5bc94187d nodeName:}" failed. No retries permitted until 2025-02-24 13:04:52.651100372 +0000 UTC m=+9.857241522 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ce810ee2-61f4-4f98-bbf2-63e5bc94187d-config-volume") pod "coredns-6d4b75cb6d-hg8v8" (UID: "ce810ee2-61f4-4f98-bbf2-63e5bc94187d") : object "kube-system"/"coredns" not registered
	Feb 24 13:04:51 test-preload-993368 kubelet[1124]: E0224 13:04:51.058261    1124 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-hg8v8" podUID=ce810ee2-61f4-4f98-bbf2-63e5bc94187d
	Feb 24 13:04:52 test-preload-993368 kubelet[1124]: E0224 13:04:52.665951    1124 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 24 13:04:52 test-preload-993368 kubelet[1124]: E0224 13:04:52.666479    1124 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/ce810ee2-61f4-4f98-bbf2-63e5bc94187d-config-volume podName:ce810ee2-61f4-4f98-bbf2-63e5bc94187d nodeName:}" failed. No retries permitted until 2025-02-24 13:04:56.666458568 +0000 UTC m=+13.872599718 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ce810ee2-61f4-4f98-bbf2-63e5bc94187d-config-volume") pod "coredns-6d4b75cb6d-hg8v8" (UID: "ce810ee2-61f4-4f98-bbf2-63e5bc94187d") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [e6e7b6176a50e87c9b6d5d32d793be62638484ebcc3093e045c24ba898b4d3a4] <==
	I0224 13:04:50.397412       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-993368 -n test-preload-993368
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-993368 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-993368" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-993368
--- FAIL: TestPreload (298.22s)
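For context on the readiness polls the harness performs (the helpers_test.go waits shown above) before it collects a post-mortem, the following is a minimal client-go sketch under stated assumptions; it is not code from the minikube repository. The function name waitForPodReady and the kubeconfig path are illustrative placeholders, and wait.PollUntilContextTimeout assumes a reasonably recent k8s.io/apimachinery.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodReady polls until the named pod reports the Ready condition or the
// timeout elapses; transient Get errors keep the poll going rather than failing.
func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat API hiccups during restart as retryable
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}

func main() {
	// Placeholder kubeconfig path; the real harness resolves it per profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPodReady(context.Background(), cs, "kube-system", "coredns-6d4b75cb6d-hg8v8", 2*time.Minute); err != nil {
		fmt.Println("pod not ready:", err)
		return
	}
	fmt.Println("pod is Ready")
}

Returning (false, nil) on Get errors mirrors the tolerant behavior a restart scenario like the one logged above needs: the API server and CNI come up asynchronously, so a single failed lookup should not abort the wait.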

                                                
                                    
TestKubernetesUpgrade (444.14s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-973775 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-973775 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m19.994116568s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-973775] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20451
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20451-887294/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-887294/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-973775" primary control-plane node in "kubernetes-upgrade-973775" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0224 13:08:08.132808  931984 out.go:345] Setting OutFile to fd 1 ...
	I0224 13:08:08.133155  931984 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 13:08:08.133170  931984 out.go:358] Setting ErrFile to fd 2...
	I0224 13:08:08.133178  931984 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 13:08:08.133535  931984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-887294/.minikube/bin
	I0224 13:08:08.134341  931984 out.go:352] Setting JSON to false
	I0224 13:08:08.135736  931984 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10229,"bootTime":1740392259,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 13:08:08.135885  931984 start.go:139] virtualization: kvm guest
	I0224 13:08:08.214547  931984 out.go:177] * [kubernetes-upgrade-973775] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0224 13:08:08.319954  931984 out.go:177]   - MINIKUBE_LOCATION=20451
	I0224 13:08:08.319952  931984 notify.go:220] Checking for updates...
	I0224 13:08:08.452584  931984 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 13:08:08.621867  931984 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20451-887294/kubeconfig
	I0224 13:08:08.710316  931984 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-887294/.minikube
	I0224 13:08:08.777026  931984 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0224 13:08:08.812623  931984 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 13:08:08.936019  931984 config.go:182] Loaded profile config "NoKubernetes-248837": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 13:08:08.936164  931984 config.go:182] Loaded profile config "offline-crio-226975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 13:08:08.936251  931984 config.go:182] Loaded profile config "running-upgrade-271664": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0224 13:08:08.936360  931984 driver.go:394] Setting default libvirt URI to qemu:///system
	I0224 13:08:09.056349  931984 out.go:177] * Using the kvm2 driver based on user configuration
	I0224 13:08:09.140233  931984 start.go:297] selected driver: kvm2
	I0224 13:08:09.140266  931984 start.go:901] validating driver "kvm2" against <nil>
	I0224 13:08:09.140287  931984 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 13:08:09.141396  931984 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 13:08:09.141513  931984 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20451-887294/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0224 13:08:09.158388  931984 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0224 13:08:09.158469  931984 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0224 13:08:09.158811  931984 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0224 13:08:09.158859  931984 cni.go:84] Creating CNI manager for ""
	I0224 13:08:09.158961  931984 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0224 13:08:09.158974  931984 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0224 13:08:09.159048  931984 start.go:340] cluster config:
	{Name:kubernetes-upgrade-973775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-973775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 13:08:09.159188  931984 iso.go:125] acquiring lock: {Name:mk57408cca66a96a13d93cda43cdfac6e61aef3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 13:08:09.182328  931984 out.go:177] * Starting "kubernetes-upgrade-973775" primary control-plane node in "kubernetes-upgrade-973775" cluster
	I0224 13:08:09.249337  931984 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0224 13:08:09.249450  931984 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0224 13:08:09.249469  931984 cache.go:56] Caching tarball of preloaded images
	I0224 13:08:09.249624  931984 preload.go:172] Found /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0224 13:08:09.249641  931984 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0224 13:08:09.249779  931984 profile.go:143] Saving config to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kubernetes-upgrade-973775/config.json ...
	I0224 13:08:09.249809  931984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kubernetes-upgrade-973775/config.json: {Name:mk1e8c22a43249acbd88d9511918e74a07659ad2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:08:09.250011  931984 start.go:360] acquireMachinesLock for kubernetes-upgrade-973775: {Name:mk023761b01bb629a1acd40bc8104cc517b0e15b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0224 13:08:53.570279  931984 start.go:364] duration metric: took 44.320213049s to acquireMachinesLock for "kubernetes-upgrade-973775"
	I0224 13:08:53.570365  931984 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-973775 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-973775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0224 13:08:53.570550  931984 start.go:125] createHost starting for "" (driver="kvm2")
	I0224 13:08:53.572931  931984 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0224 13:08:53.573178  931984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:08:53.573255  931984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:08:53.590769  931984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40393
	I0224 13:08:53.591301  931984 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:08:53.591925  931984 main.go:141] libmachine: Using API Version  1
	I0224 13:08:53.591953  931984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:08:53.592309  931984 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:08:53.592519  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetMachineName
	I0224 13:08:53.592702  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .DriverName
	I0224 13:08:53.592883  931984 start.go:159] libmachine.API.Create for "kubernetes-upgrade-973775" (driver="kvm2")
	I0224 13:08:53.592940  931984 client.go:168] LocalClient.Create starting
	I0224 13:08:53.592984  931984 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem
	I0224 13:08:53.593029  931984 main.go:141] libmachine: Decoding PEM data...
	I0224 13:08:53.593057  931984 main.go:141] libmachine: Parsing certificate...
	I0224 13:08:53.593138  931984 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem
	I0224 13:08:53.593170  931984 main.go:141] libmachine: Decoding PEM data...
	I0224 13:08:53.593213  931984 main.go:141] libmachine: Parsing certificate...
	I0224 13:08:53.593244  931984 main.go:141] libmachine: Running pre-create checks...
	I0224 13:08:53.593263  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .PreCreateCheck
	I0224 13:08:53.593668  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetConfigRaw
	I0224 13:08:53.594137  931984 main.go:141] libmachine: Creating machine...
	I0224 13:08:53.594153  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .Create
	I0224 13:08:53.594296  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) creating KVM machine...
	I0224 13:08:53.594317  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) creating network...
	I0224 13:08:53.595858  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | found existing default KVM network
	I0224 13:08:53.597118  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | I0224 13:08:53.596908  932602 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:d5:9c:7f} reservation:<nil>}
	I0224 13:08:53.598123  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | I0224 13:08:53.598024  932602 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002667a0}
	I0224 13:08:53.598154  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | created network xml: 
	I0224 13:08:53.598167  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | <network>
	I0224 13:08:53.598207  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG |   <name>mk-kubernetes-upgrade-973775</name>
	I0224 13:08:53.598221  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG |   <dns enable='no'/>
	I0224 13:08:53.598230  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG |   
	I0224 13:08:53.598240  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0224 13:08:53.598250  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG |     <dhcp>
	I0224 13:08:53.598259  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0224 13:08:53.598268  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG |     </dhcp>
	I0224 13:08:53.598279  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG |   </ip>
	I0224 13:08:53.598286  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG |   
	I0224 13:08:53.598316  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | </network>
	I0224 13:08:53.598339  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | 
	I0224 13:08:53.603877  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | trying to create private KVM network mk-kubernetes-upgrade-973775 192.168.50.0/24...
	I0224 13:08:53.682067  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | private KVM network mk-kubernetes-upgrade-973775 192.168.50.0/24 created
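
For reference, a minimal Go sketch of how a libvirt <network> definition like the one logged above could be rendered from a profile name and subnet. This is not minikube's kvm2 driver code; the type and template names are illustrative, and the values are the ones visible in the log.

// Illustrative only: renders a libvirt <network> definition similar to the
// one logged above. Field names and template text are assumptions for this
// sketch, not minikube's source.
package main

import (
	"fmt"
	"os"
	"text/template"
)

type netParams struct {
	Name    string // e.g. mk-kubernetes-upgrade-973775
	Gateway string // e.g. 192.168.50.1
	Netmask string // e.g. 255.255.255.0
	DHCPLow string // e.g. 192.168.50.2
	DHCPHi  string // e.g. 192.168.50.253
}

var networkTmpl = template.Must(template.New("net").Parse(`<network>
  <name>{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
    <dhcp>
      <range start='{{.DHCPLow}}' end='{{.DHCPHi}}'/>
    </dhcp>
  </ip>
</network>
`))

func main() {
	p := netParams{
		Name:    "mk-kubernetes-upgrade-973775",
		Gateway: "192.168.50.1",
		Netmask: "255.255.255.0",
		DHCPLow: "192.168.50.2",
		DHCPHi:  "192.168.50.253",
	}
	if err := networkTmpl.Execute(os.Stdout, p); err != nil {
		fmt.Fprintln(os.Stderr, "render failed:", err)
		os.Exit(1)
	}
}
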
	I0224 13:08:53.682106  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | I0224 13:08:53.681981  932602 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20451-887294/.minikube
	I0224 13:08:53.682120  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) setting up store path in /home/jenkins/minikube-integration/20451-887294/.minikube/machines/kubernetes-upgrade-973775 ...
	I0224 13:08:53.682139  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) building disk image from file:///home/jenkins/minikube-integration/20451-887294/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0224 13:08:53.682161  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Downloading /home/jenkins/minikube-integration/20451-887294/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20451-887294/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0224 13:08:53.958356  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | I0224 13:08:53.958214  932602 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/kubernetes-upgrade-973775/id_rsa...
	I0224 13:08:54.111700  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | I0224 13:08:54.111524  932602 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/kubernetes-upgrade-973775/kubernetes-upgrade-973775.rawdisk...
	I0224 13:08:54.111733  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | Writing magic tar header
	I0224 13:08:54.111746  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | Writing SSH key tar header
	I0224 13:08:54.111754  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | I0224 13:08:54.111668  932602 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20451-887294/.minikube/machines/kubernetes-upgrade-973775 ...
	I0224 13:08:54.111840  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/kubernetes-upgrade-973775
	I0224 13:08:54.111875  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) setting executable bit set on /home/jenkins/minikube-integration/20451-887294/.minikube/machines/kubernetes-upgrade-973775 (perms=drwx------)
	I0224 13:08:54.111892  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20451-887294/.minikube/machines
	I0224 13:08:54.111904  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) setting executable bit set on /home/jenkins/minikube-integration/20451-887294/.minikube/machines (perms=drwxr-xr-x)
	I0224 13:08:54.111921  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) setting executable bit set on /home/jenkins/minikube-integration/20451-887294/.minikube (perms=drwxr-xr-x)
	I0224 13:08:54.111927  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) setting executable bit set on /home/jenkins/minikube-integration/20451-887294 (perms=drwxrwxr-x)
	I0224 13:08:54.111938  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0224 13:08:54.111944  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0224 13:08:54.111952  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) creating domain...
	I0224 13:08:54.111968  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20451-887294/.minikube
	I0224 13:08:54.111986  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20451-887294
	I0224 13:08:54.112003  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0224 13:08:54.112015  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | checking permissions on dir: /home/jenkins
	I0224 13:08:54.112022  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | checking permissions on dir: /home
	I0224 13:08:54.112030  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | skipping /home - not owner
	I0224 13:08:54.113463  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) define libvirt domain using xml: 
	I0224 13:08:54.113488  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) <domain type='kvm'>
	I0224 13:08:54.113496  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)   <name>kubernetes-upgrade-973775</name>
	I0224 13:08:54.113505  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)   <memory unit='MiB'>2200</memory>
	I0224 13:08:54.113510  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)   <vcpu>2</vcpu>
	I0224 13:08:54.113515  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)   <features>
	I0224 13:08:54.113520  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)     <acpi/>
	I0224 13:08:54.113541  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)     <apic/>
	I0224 13:08:54.113547  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)     <pae/>
	I0224 13:08:54.113551  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)     
	I0224 13:08:54.113560  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)   </features>
	I0224 13:08:54.113565  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)   <cpu mode='host-passthrough'>
	I0224 13:08:54.113572  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)   
	I0224 13:08:54.113580  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)   </cpu>
	I0224 13:08:54.113585  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)   <os>
	I0224 13:08:54.113589  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)     <type>hvm</type>
	I0224 13:08:54.113594  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)     <boot dev='cdrom'/>
	I0224 13:08:54.113598  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)     <boot dev='hd'/>
	I0224 13:08:54.113610  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)     <bootmenu enable='no'/>
	I0224 13:08:54.113614  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)   </os>
	I0224 13:08:54.113618  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)   <devices>
	I0224 13:08:54.113623  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)     <disk type='file' device='cdrom'>
	I0224 13:08:54.113631  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)       <source file='/home/jenkins/minikube-integration/20451-887294/.minikube/machines/kubernetes-upgrade-973775/boot2docker.iso'/>
	I0224 13:08:54.113638  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)       <target dev='hdc' bus='scsi'/>
	I0224 13:08:54.113642  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)       <readonly/>
	I0224 13:08:54.113647  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)     </disk>
	I0224 13:08:54.113673  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)     <disk type='file' device='disk'>
	I0224 13:08:54.113700  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0224 13:08:54.113710  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)       <source file='/home/jenkins/minikube-integration/20451-887294/.minikube/machines/kubernetes-upgrade-973775/kubernetes-upgrade-973775.rawdisk'/>
	I0224 13:08:54.113721  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)       <target dev='hda' bus='virtio'/>
	I0224 13:08:54.113726  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)     </disk>
	I0224 13:08:54.113735  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)     <interface type='network'>
	I0224 13:08:54.113744  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)       <source network='mk-kubernetes-upgrade-973775'/>
	I0224 13:08:54.113762  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)       <model type='virtio'/>
	I0224 13:08:54.113770  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)     </interface>
	I0224 13:08:54.113782  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)     <interface type='network'>
	I0224 13:08:54.113791  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)       <source network='default'/>
	I0224 13:08:54.113796  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)       <model type='virtio'/>
	I0224 13:08:54.113802  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)     </interface>
	I0224 13:08:54.113810  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)     <serial type='pty'>
	I0224 13:08:54.113818  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)       <target port='0'/>
	I0224 13:08:54.113822  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)     </serial>
	I0224 13:08:54.113830  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)     <console type='pty'>
	I0224 13:08:54.113834  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)       <target type='serial' port='0'/>
	I0224 13:08:54.113842  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)     </console>
	I0224 13:08:54.113846  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)     <rng model='virtio'>
	I0224 13:08:54.113852  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)       <backend model='random'>/dev/random</backend>
	I0224 13:08:54.113857  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)     </rng>
	I0224 13:08:54.113864  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)     
	I0224 13:08:54.113870  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)     
	I0224 13:08:54.113875  931984 main.go:141] libmachine: (kubernetes-upgrade-973775)   </devices>
	I0224 13:08:54.113885  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) </domain>
	I0224 13:08:54.113915  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) 
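
The domain XML above can be defined and started manually for debugging. A small Go sketch follows; it assumes the libvirt client tools (virsh) are installed and that the XML has been saved to a local file. minikube's kvm2 driver talks to libvirt through its API rather than shelling out, so this is only an equivalent manual check, not the driver's implementation.

// Illustrative only: defines and starts a domain from an XML file by
// shelling out to virsh against qemu:///system.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func virsh(args ...string) error {
	cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	domainXML := "kubernetes-upgrade-973775.xml" // XML like the block logged above
	if err := virsh("define", domainXML); err != nil {
		fmt.Fprintln(os.Stderr, "define failed:", err)
		os.Exit(1)
	}
	if err := virsh("start", "kubernetes-upgrade-973775"); err != nil {
		fmt.Fprintln(os.Stderr, "start failed:", err)
		os.Exit(1)
	}
}
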
	I0224 13:08:54.118622  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:50:c4:94 in network default
	I0224 13:08:54.119228  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:08:54.119253  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) starting domain...
	I0224 13:08:54.119261  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) ensuring networks are active...
	I0224 13:08:54.119970  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Ensuring network default is active
	I0224 13:08:54.120373  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Ensuring network mk-kubernetes-upgrade-973775 is active
	I0224 13:08:54.120928  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) getting domain XML...
	I0224 13:08:54.121716  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) creating domain...
	I0224 13:08:55.362934  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) waiting for IP...
	I0224 13:08:55.363676  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:08:55.364081  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | unable to find current IP address of domain kubernetes-upgrade-973775 in network mk-kubernetes-upgrade-973775
	I0224 13:08:55.364116  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | I0224 13:08:55.364075  932602 retry.go:31] will retry after 302.620694ms: waiting for domain to come up
	I0224 13:08:55.668531  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:08:55.668938  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | unable to find current IP address of domain kubernetes-upgrade-973775 in network mk-kubernetes-upgrade-973775
	I0224 13:08:55.668977  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | I0224 13:08:55.668891  932602 retry.go:31] will retry after 292.780921ms: waiting for domain to come up
	I0224 13:08:55.963695  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:08:55.964254  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | unable to find current IP address of domain kubernetes-upgrade-973775 in network mk-kubernetes-upgrade-973775
	I0224 13:08:55.964279  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | I0224 13:08:55.964228  932602 retry.go:31] will retry after 424.123426ms: waiting for domain to come up
	I0224 13:08:56.389965  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:08:56.390444  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | unable to find current IP address of domain kubernetes-upgrade-973775 in network mk-kubernetes-upgrade-973775
	I0224 13:08:56.390534  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | I0224 13:08:56.390430  932602 retry.go:31] will retry after 424.911917ms: waiting for domain to come up
	I0224 13:08:56.817103  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:08:56.817643  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | unable to find current IP address of domain kubernetes-upgrade-973775 in network mk-kubernetes-upgrade-973775
	I0224 13:08:56.817671  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | I0224 13:08:56.817596  932602 retry.go:31] will retry after 703.792113ms: waiting for domain to come up
	I0224 13:08:57.523620  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:08:57.524086  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | unable to find current IP address of domain kubernetes-upgrade-973775 in network mk-kubernetes-upgrade-973775
	I0224 13:08:57.524114  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | I0224 13:08:57.524053  932602 retry.go:31] will retry after 573.645975ms: waiting for domain to come up
	I0224 13:08:58.099787  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:08:58.100311  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | unable to find current IP address of domain kubernetes-upgrade-973775 in network mk-kubernetes-upgrade-973775
	I0224 13:08:58.100390  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | I0224 13:08:58.100288  932602 retry.go:31] will retry after 1.088899988s: waiting for domain to come up
	I0224 13:08:59.191051  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:08:59.191625  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | unable to find current IP address of domain kubernetes-upgrade-973775 in network mk-kubernetes-upgrade-973775
	I0224 13:08:59.191659  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | I0224 13:08:59.191577  932602 retry.go:31] will retry after 1.202770018s: waiting for domain to come up
	I0224 13:09:00.395866  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:00.396435  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | unable to find current IP address of domain kubernetes-upgrade-973775 in network mk-kubernetes-upgrade-973775
	I0224 13:09:00.396465  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | I0224 13:09:00.396392  932602 retry.go:31] will retry after 1.710714106s: waiting for domain to come up
	I0224 13:09:02.108684  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:02.109219  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | unable to find current IP address of domain kubernetes-upgrade-973775 in network mk-kubernetes-upgrade-973775
	I0224 13:09:02.109293  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | I0224 13:09:02.109206  932602 retry.go:31] will retry after 1.770701109s: waiting for domain to come up
	I0224 13:09:03.881814  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:03.882544  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | unable to find current IP address of domain kubernetes-upgrade-973775 in network mk-kubernetes-upgrade-973775
	I0224 13:09:03.882582  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | I0224 13:09:03.882473  932602 retry.go:31] will retry after 2.286911065s: waiting for domain to come up
	I0224 13:09:06.171091  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:06.171590  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | unable to find current IP address of domain kubernetes-upgrade-973775 in network mk-kubernetes-upgrade-973775
	I0224 13:09:06.171624  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | I0224 13:09:06.171546  932602 retry.go:31] will retry after 3.114860334s: waiting for domain to come up
	I0224 13:09:09.288562  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:09.288999  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | unable to find current IP address of domain kubernetes-upgrade-973775 in network mk-kubernetes-upgrade-973775
	I0224 13:09:09.289031  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | I0224 13:09:09.288963  932602 retry.go:31] will retry after 3.640803469s: waiting for domain to come up
	I0224 13:09:12.933928  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:12.934450  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | unable to find current IP address of domain kubernetes-upgrade-973775 in network mk-kubernetes-upgrade-973775
	I0224 13:09:12.934511  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | I0224 13:09:12.934413  932602 retry.go:31] will retry after 5.659446011s: waiting for domain to come up
	I0224 13:09:18.596961  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:18.597603  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has current primary IP address 192.168.50.35 and MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:18.597632  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) found domain IP: 192.168.50.35
	I0224 13:09:18.597646  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) reserving static IP address...
	I0224 13:09:18.598062  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-973775", mac: "52:54:00:d8:44:47", ip: "192.168.50.35"} in network mk-kubernetes-upgrade-973775
	I0224 13:09:18.686727  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | Getting to WaitForSSH function...
	I0224 13:09:18.686767  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) reserved static IP address 192.168.50.35 for domain kubernetes-upgrade-973775
	I0224 13:09:18.686801  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) waiting for SSH...
	I0224 13:09:18.690267  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:18.690709  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:44:47", ip: ""} in network mk-kubernetes-upgrade-973775: {Iface:virbr2 ExpiryTime:2025-02-24 14:09:09 +0000 UTC Type:0 Mac:52:54:00:d8:44:47 Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d8:44:47}
	I0224 13:09:18.690748  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined IP address 192.168.50.35 and MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:18.690949  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | Using SSH client type: external
	I0224 13:09:18.690980  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | Using SSH private key: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/kubernetes-upgrade-973775/id_rsa (-rw-------)
	I0224 13:09:18.691018  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.35 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20451-887294/.minikube/machines/kubernetes-upgrade-973775/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0224 13:09:18.691037  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | About to run SSH command:
	I0224 13:09:18.691056  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | exit 0
	I0224 13:09:18.829986  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | SSH cmd err, output: <nil>: 
	I0224 13:09:18.830268  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) KVM machine creation complete
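
The "will retry after ..." lines above show the wait-for-IP loop polling with growing delays until the domain's DHCP lease appears. A minimal Go sketch of that pattern is below; retryUntil and the stand-in check are assumptions for illustration, not minikube's retry.go.

// Illustrative only: a retry loop in the spirit of the "will retry after"
// lines above, polling until a check succeeds or a deadline passes.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil polls check() with growing, jittered delays until it returns
// nil or the timeout elapses.
func retryUntil(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow roughly geometrically, as in the log
	}
}

func main() {
	// Stand-in check: pretend the DHCP lease shows up after ~25s.
	start := time.Now()
	err := retryUntil(2*time.Minute, func() error {
		if time.Since(start) < 25*time.Second {
			return errors.New("unable to find current IP address of domain")
		}
		return nil
	})
	fmt.Println("result:", err)
}
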
	I0224 13:09:18.830613  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetConfigRaw
	I0224 13:09:18.831283  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .DriverName
	I0224 13:09:18.831538  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .DriverName
	I0224 13:09:18.831751  931984 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0224 13:09:18.831769  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetState
	I0224 13:09:18.833664  931984 main.go:141] libmachine: Detecting operating system of created instance...
	I0224 13:09:18.833682  931984 main.go:141] libmachine: Waiting for SSH to be available...
	I0224 13:09:18.833690  931984 main.go:141] libmachine: Getting to WaitForSSH function...
	I0224 13:09:18.833699  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHHostname
	I0224 13:09:18.836351  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:18.836778  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:44:47", ip: ""} in network mk-kubernetes-upgrade-973775: {Iface:virbr2 ExpiryTime:2025-02-24 14:09:09 +0000 UTC Type:0 Mac:52:54:00:d8:44:47 Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:kubernetes-upgrade-973775 Clientid:01:52:54:00:d8:44:47}
	I0224 13:09:18.836814  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined IP address 192.168.50.35 and MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:18.837019  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHPort
	I0224 13:09:18.837232  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHKeyPath
	I0224 13:09:18.837442  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHKeyPath
	I0224 13:09:18.837640  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHUsername
	I0224 13:09:18.837810  931984 main.go:141] libmachine: Using SSH client type: native
	I0224 13:09:18.838021  931984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.35 22 <nil> <nil>}
	I0224 13:09:18.838032  931984 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0224 13:09:18.952912  931984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 13:09:18.952959  931984 main.go:141] libmachine: Detecting the provisioner...
	I0224 13:09:18.952985  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHHostname
	I0224 13:09:18.956595  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:18.957073  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:44:47", ip: ""} in network mk-kubernetes-upgrade-973775: {Iface:virbr2 ExpiryTime:2025-02-24 14:09:09 +0000 UTC Type:0 Mac:52:54:00:d8:44:47 Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:kubernetes-upgrade-973775 Clientid:01:52:54:00:d8:44:47}
	I0224 13:09:18.957105  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined IP address 192.168.50.35 and MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:18.957380  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHPort
	I0224 13:09:18.957621  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHKeyPath
	I0224 13:09:18.957790  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHKeyPath
	I0224 13:09:18.957980  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHUsername
	I0224 13:09:18.958161  931984 main.go:141] libmachine: Using SSH client type: native
	I0224 13:09:18.958368  931984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.35 22 <nil> <nil>}
	I0224 13:09:18.958382  931984 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0224 13:09:19.066614  931984 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0224 13:09:19.066710  931984 main.go:141] libmachine: found compatible host: buildroot
	I0224 13:09:19.066726  931984 main.go:141] libmachine: Provisioning with buildroot...
	I0224 13:09:19.066738  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetMachineName
	I0224 13:09:19.067055  931984 buildroot.go:166] provisioning hostname "kubernetes-upgrade-973775"
	I0224 13:09:19.067088  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetMachineName
	I0224 13:09:19.067396  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHHostname
	I0224 13:09:19.070833  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:19.071355  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:44:47", ip: ""} in network mk-kubernetes-upgrade-973775: {Iface:virbr2 ExpiryTime:2025-02-24 14:09:09 +0000 UTC Type:0 Mac:52:54:00:d8:44:47 Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:kubernetes-upgrade-973775 Clientid:01:52:54:00:d8:44:47}
	I0224 13:09:19.071392  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined IP address 192.168.50.35 and MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:19.071593  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHPort
	I0224 13:09:19.071802  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHKeyPath
	I0224 13:09:19.071969  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHKeyPath
	I0224 13:09:19.072148  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHUsername
	I0224 13:09:19.072374  931984 main.go:141] libmachine: Using SSH client type: native
	I0224 13:09:19.072574  931984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.35 22 <nil> <nil>}
	I0224 13:09:19.072587  931984 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-973775 && echo "kubernetes-upgrade-973775" | sudo tee /etc/hostname
	I0224 13:09:19.194048  931984 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-973775
	
	I0224 13:09:19.194094  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHHostname
	I0224 13:09:19.197606  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:19.198107  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:44:47", ip: ""} in network mk-kubernetes-upgrade-973775: {Iface:virbr2 ExpiryTime:2025-02-24 14:09:09 +0000 UTC Type:0 Mac:52:54:00:d8:44:47 Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:kubernetes-upgrade-973775 Clientid:01:52:54:00:d8:44:47}
	I0224 13:09:19.198143  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined IP address 192.168.50.35 and MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:19.198370  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHPort
	I0224 13:09:19.198588  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHKeyPath
	I0224 13:09:19.198772  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHKeyPath
	I0224 13:09:19.198942  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHUsername
	I0224 13:09:19.199110  931984 main.go:141] libmachine: Using SSH client type: native
	I0224 13:09:19.199319  931984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.35 22 <nil> <nil>}
	I0224 13:09:19.199341  931984 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-973775' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-973775/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-973775' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 13:09:19.320192  931984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
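
Each "About to run SSH command" step above runs a one-off command on the new VM over SSH with the generated id_rsa key. A minimal Go sketch of that pattern follows, using golang.org/x/crypto/ssh; the address, user, and key path are taken from the log, and the error handling is deliberately thin. This is not minikube's sshutil, just an equivalent standalone example.

// Illustrative only: runs a single remote command over SSH, mirroring the
// "About to run SSH command" steps above.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote("192.168.50.35:22", "docker",
		"/home/jenkins/minikube-integration/20451-887294/.minikube/machines/kubernetes-upgrade-973775/id_rsa",
		"cat /etc/os-release")
	fmt.Print(out)
	if err != nil {
		fmt.Fprintln(os.Stderr, "ssh command failed:", err)
	}
}
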
	I0224 13:09:19.320259  931984 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20451-887294/.minikube CaCertPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20451-887294/.minikube}
	I0224 13:09:19.320321  931984 buildroot.go:174] setting up certificates
	I0224 13:09:19.320342  931984 provision.go:84] configureAuth start
	I0224 13:09:19.320357  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetMachineName
	I0224 13:09:19.320770  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetIP
	I0224 13:09:19.324028  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:19.324482  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:44:47", ip: ""} in network mk-kubernetes-upgrade-973775: {Iface:virbr2 ExpiryTime:2025-02-24 14:09:09 +0000 UTC Type:0 Mac:52:54:00:d8:44:47 Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:kubernetes-upgrade-973775 Clientid:01:52:54:00:d8:44:47}
	I0224 13:09:19.324516  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined IP address 192.168.50.35 and MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:19.324792  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHHostname
	I0224 13:09:19.327543  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:19.328006  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:44:47", ip: ""} in network mk-kubernetes-upgrade-973775: {Iface:virbr2 ExpiryTime:2025-02-24 14:09:09 +0000 UTC Type:0 Mac:52:54:00:d8:44:47 Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:kubernetes-upgrade-973775 Clientid:01:52:54:00:d8:44:47}
	I0224 13:09:19.328033  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined IP address 192.168.50.35 and MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:19.328164  931984 provision.go:143] copyHostCerts
	I0224 13:09:19.328259  931984 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-887294/.minikube/cert.pem, removing ...
	I0224 13:09:19.328274  931984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-887294/.minikube/cert.pem
	I0224 13:09:19.328354  931984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20451-887294/.minikube/cert.pem (1123 bytes)
	I0224 13:09:19.328511  931984 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-887294/.minikube/key.pem, removing ...
	I0224 13:09:19.328526  931984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-887294/.minikube/key.pem
	I0224 13:09:19.328561  931984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20451-887294/.minikube/key.pem (1679 bytes)
	I0224 13:09:19.328644  931984 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-887294/.minikube/ca.pem, removing ...
	I0224 13:09:19.328654  931984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-887294/.minikube/ca.pem
	I0224 13:09:19.328681  931984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20451-887294/.minikube/ca.pem (1082 bytes)
	I0224 13:09:19.328759  931984 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-973775 san=[127.0.0.1 192.168.50.35 kubernetes-upgrade-973775 localhost minikube]
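
The server-cert step above issues a certificate whose SANs cover 127.0.0.1, the VM IP, the hostname, localhost, and minikube. A generic crypto/x509 sketch of that kind of issuance follows; it creates a throwaway CA for self-containment (minikube reuses its existing ca.pem/ca-key.pem) and elides error handling for brevity. It is not minikube's provision.go.

// Illustrative only: issues a server certificate signed by a CA, with a SAN
// list like the one logged above. Error handling elided for brevity.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for the sketch; minikube reuses ca.pem/ca-key.pem instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-973775"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"kubernetes-upgrade-973775", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.35")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
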
	I0224 13:09:19.670906  931984 provision.go:177] copyRemoteCerts
	I0224 13:09:19.670974  931984 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 13:09:19.671021  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHHostname
	I0224 13:09:19.674206  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:19.674753  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:44:47", ip: ""} in network mk-kubernetes-upgrade-973775: {Iface:virbr2 ExpiryTime:2025-02-24 14:09:09 +0000 UTC Type:0 Mac:52:54:00:d8:44:47 Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:kubernetes-upgrade-973775 Clientid:01:52:54:00:d8:44:47}
	I0224 13:09:19.674796  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined IP address 192.168.50.35 and MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:19.674975  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHPort
	I0224 13:09:19.675223  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHKeyPath
	I0224 13:09:19.675454  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHUsername
	I0224 13:09:19.675660  931984 sshutil.go:53] new ssh client: &{IP:192.168.50.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/kubernetes-upgrade-973775/id_rsa Username:docker}
	I0224 13:09:19.760312  931984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0224 13:09:19.793455  931984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0224 13:09:19.820602  931984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0224 13:09:19.845600  931984 provision.go:87] duration metric: took 525.242408ms to configureAuth
	I0224 13:09:19.845632  931984 buildroot.go:189] setting minikube options for container-runtime
	I0224 13:09:19.845845  931984 config.go:182] Loaded profile config "kubernetes-upgrade-973775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0224 13:09:19.845952  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHHostname
	I0224 13:09:19.848937  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:19.849495  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:44:47", ip: ""} in network mk-kubernetes-upgrade-973775: {Iface:virbr2 ExpiryTime:2025-02-24 14:09:09 +0000 UTC Type:0 Mac:52:54:00:d8:44:47 Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:kubernetes-upgrade-973775 Clientid:01:52:54:00:d8:44:47}
	I0224 13:09:19.849562  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined IP address 192.168.50.35 and MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:19.849847  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHPort
	I0224 13:09:19.850066  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHKeyPath
	I0224 13:09:19.850274  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHKeyPath
	I0224 13:09:19.850426  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHUsername
	I0224 13:09:19.850674  931984 main.go:141] libmachine: Using SSH client type: native
	I0224 13:09:19.850852  931984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.35 22 <nil> <nil>}
	I0224 13:09:19.850867  931984 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0224 13:09:20.098216  931984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0224 13:09:20.098257  931984 main.go:141] libmachine: Checking connection to Docker...
	I0224 13:09:20.098269  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetURL
	I0224 13:09:20.099639  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | using libvirt version 6000000
	I0224 13:09:20.102137  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:20.102451  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:44:47", ip: ""} in network mk-kubernetes-upgrade-973775: {Iface:virbr2 ExpiryTime:2025-02-24 14:09:09 +0000 UTC Type:0 Mac:52:54:00:d8:44:47 Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:kubernetes-upgrade-973775 Clientid:01:52:54:00:d8:44:47}
	I0224 13:09:20.102488  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined IP address 192.168.50.35 and MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:20.102634  931984 main.go:141] libmachine: Docker is up and running!
	I0224 13:09:20.102649  931984 main.go:141] libmachine: Reticulating splines...
	I0224 13:09:20.102656  931984 client.go:171] duration metric: took 26.509704209s to LocalClient.Create
	I0224 13:09:20.102682  931984 start.go:167] duration metric: took 26.509800703s to libmachine.API.Create "kubernetes-upgrade-973775"
	I0224 13:09:20.102693  931984 start.go:293] postStartSetup for "kubernetes-upgrade-973775" (driver="kvm2")
	I0224 13:09:20.102702  931984 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 13:09:20.102720  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .DriverName
	I0224 13:09:20.102985  931984 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 13:09:20.103017  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHHostname
	I0224 13:09:20.105153  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:20.105488  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:44:47", ip: ""} in network mk-kubernetes-upgrade-973775: {Iface:virbr2 ExpiryTime:2025-02-24 14:09:09 +0000 UTC Type:0 Mac:52:54:00:d8:44:47 Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:kubernetes-upgrade-973775 Clientid:01:52:54:00:d8:44:47}
	I0224 13:09:20.105516  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined IP address 192.168.50.35 and MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:20.105709  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHPort
	I0224 13:09:20.105933  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHKeyPath
	I0224 13:09:20.106104  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHUsername
	I0224 13:09:20.106302  931984 sshutil.go:53] new ssh client: &{IP:192.168.50.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/kubernetes-upgrade-973775/id_rsa Username:docker}
	I0224 13:09:20.200022  931984 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 13:09:20.206019  931984 info.go:137] Remote host: Buildroot 2023.02.9
	I0224 13:09:20.206061  931984 filesync.go:126] Scanning /home/jenkins/minikube-integration/20451-887294/.minikube/addons for local assets ...
	I0224 13:09:20.206150  931984 filesync.go:126] Scanning /home/jenkins/minikube-integration/20451-887294/.minikube/files for local assets ...
	I0224 13:09:20.206285  931984 filesync.go:149] local asset: /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/ssl/certs/8945642.pem -> 8945642.pem in /etc/ssl/certs
	I0224 13:09:20.206428  931984 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0224 13:09:20.217490  931984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/ssl/certs/8945642.pem --> /etc/ssl/certs/8945642.pem (1708 bytes)
	I0224 13:09:20.249210  931984 start.go:296] duration metric: took 146.498901ms for postStartSetup
	I0224 13:09:20.249288  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetConfigRaw
	I0224 13:09:20.250134  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetIP
	I0224 13:09:20.252904  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:20.253293  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:44:47", ip: ""} in network mk-kubernetes-upgrade-973775: {Iface:virbr2 ExpiryTime:2025-02-24 14:09:09 +0000 UTC Type:0 Mac:52:54:00:d8:44:47 Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:kubernetes-upgrade-973775 Clientid:01:52:54:00:d8:44:47}
	I0224 13:09:20.253353  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined IP address 192.168.50.35 and MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:20.253586  931984 profile.go:143] Saving config to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kubernetes-upgrade-973775/config.json ...
	I0224 13:09:20.253815  931984 start.go:128] duration metric: took 26.683247421s to createHost
	I0224 13:09:20.253848  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHHostname
	I0224 13:09:20.256238  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:20.256604  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:44:47", ip: ""} in network mk-kubernetes-upgrade-973775: {Iface:virbr2 ExpiryTime:2025-02-24 14:09:09 +0000 UTC Type:0 Mac:52:54:00:d8:44:47 Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:kubernetes-upgrade-973775 Clientid:01:52:54:00:d8:44:47}
	I0224 13:09:20.256635  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined IP address 192.168.50.35 and MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:20.256782  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHPort
	I0224 13:09:20.256940  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHKeyPath
	I0224 13:09:20.257116  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHKeyPath
	I0224 13:09:20.257344  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHUsername
	I0224 13:09:20.257535  931984 main.go:141] libmachine: Using SSH client type: native
	I0224 13:09:20.257760  931984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.35 22 <nil> <nil>}
	I0224 13:09:20.257778  931984 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0224 13:09:20.372625  931984 main.go:141] libmachine: SSH cmd err, output: <nil>: 1740402560.320942230
	
	I0224 13:09:20.372655  931984 fix.go:216] guest clock: 1740402560.320942230
	I0224 13:09:20.372666  931984 fix.go:229] Guest: 2025-02-24 13:09:20.32094223 +0000 UTC Remote: 2025-02-24 13:09:20.253831484 +0000 UTC m=+72.166698591 (delta=67.110746ms)
	I0224 13:09:20.372722  931984 fix.go:200] guest clock delta is within tolerance: 67.110746ms
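
The guest-clock check above runs `date +%s.%N` on the VM, compares it with the host clock, and accepts the machine if the drift is small. A minimal Go sketch of that comparison follows; the 2s tolerance and the hard-coded host timestamp are assumptions for illustration (the real code compares against time.Now()), while the guest value is the one from the log.

// Illustrative only: parses the guest's `date +%s.%N` output and checks
// clock drift against a tolerance, as the fix.go lines above do.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts "seconds.nanoseconds" output into a time.Time.
// Sub-microsecond precision may be lost to float64, which does not matter
// for a drift check measured in milliseconds.
func parseGuestClock(out string) (time.Time, error) {
	f, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	if err != nil {
		return time.Time{}, err
	}
	sec, frac := math.Modf(f)
	return time.Unix(int64(sec), int64(frac*1e9)), nil
}

func main() {
	guest, err := parseGuestClock("1740402560.320942230") // value from the log
	if err != nil {
		panic(err)
	}
	host := time.Unix(1740402560, 253831484) // stand-in for time.Now() on the host
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed tolerance for this sketch
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
}
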
	I0224 13:09:20.372730  931984 start.go:83] releasing machines lock for "kubernetes-upgrade-973775", held for 26.802402864s
	I0224 13:09:20.372770  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .DriverName
	I0224 13:09:20.373077  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetIP
	I0224 13:09:20.376639  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:20.377054  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:44:47", ip: ""} in network mk-kubernetes-upgrade-973775: {Iface:virbr2 ExpiryTime:2025-02-24 14:09:09 +0000 UTC Type:0 Mac:52:54:00:d8:44:47 Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:kubernetes-upgrade-973775 Clientid:01:52:54:00:d8:44:47}
	I0224 13:09:20.377089  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined IP address 192.168.50.35 and MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:20.377363  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .DriverName
	I0224 13:09:20.378213  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .DriverName
	I0224 13:09:20.378446  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .DriverName
	I0224 13:09:20.378548  931984 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0224 13:09:20.378607  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHHostname
	I0224 13:09:20.378804  931984 ssh_runner.go:195] Run: cat /version.json
	I0224 13:09:20.378828  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHHostname
	I0224 13:09:20.382081  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:20.382183  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:20.382616  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:44:47", ip: ""} in network mk-kubernetes-upgrade-973775: {Iface:virbr2 ExpiryTime:2025-02-24 14:09:09 +0000 UTC Type:0 Mac:52:54:00:d8:44:47 Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:kubernetes-upgrade-973775 Clientid:01:52:54:00:d8:44:47}
	I0224 13:09:20.382640  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined IP address 192.168.50.35 and MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:20.382770  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:44:47", ip: ""} in network mk-kubernetes-upgrade-973775: {Iface:virbr2 ExpiryTime:2025-02-24 14:09:09 +0000 UTC Type:0 Mac:52:54:00:d8:44:47 Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:kubernetes-upgrade-973775 Clientid:01:52:54:00:d8:44:47}
	I0224 13:09:20.382793  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined IP address 192.168.50.35 and MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:20.382838  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHPort
	I0224 13:09:20.383089  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHKeyPath
	I0224 13:09:20.383097  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHPort
	I0224 13:09:20.383289  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHKeyPath
	I0224 13:09:20.383293  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHUsername
	I0224 13:09:20.383498  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHUsername
	I0224 13:09:20.383487  931984 sshutil.go:53] new ssh client: &{IP:192.168.50.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/kubernetes-upgrade-973775/id_rsa Username:docker}
	I0224 13:09:20.383642  931984 sshutil.go:53] new ssh client: &{IP:192.168.50.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/kubernetes-upgrade-973775/id_rsa Username:docker}
	I0224 13:09:20.471416  931984 ssh_runner.go:195] Run: systemctl --version
	I0224 13:09:20.502385  931984 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0224 13:09:20.684197  931984 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0224 13:09:20.693587  931984 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0224 13:09:20.693678  931984 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0224 13:09:20.712807  931984 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0224 13:09:20.712844  931984 start.go:495] detecting cgroup driver to use...
	I0224 13:09:20.712924  931984 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0224 13:09:20.736273  931984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0224 13:09:20.757651  931984 docker.go:217] disabling cri-docker service (if available) ...
	I0224 13:09:20.757720  931984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0224 13:09:20.781227  931984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0224 13:09:20.804145  931984 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0224 13:09:20.989364  931984 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0224 13:09:21.175177  931984 docker.go:233] disabling docker service ...
	I0224 13:09:21.175267  931984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0224 13:09:21.191834  931984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0224 13:09:21.207425  931984 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0224 13:09:21.341683  931984 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0224 13:09:21.483322  931984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0224 13:09:21.500241  931984 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 13:09:21.522177  931984 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0224 13:09:21.522288  931984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:09:21.534916  931984 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0224 13:09:21.534997  931984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:09:21.547798  931984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:09:21.560160  931984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:09:21.573665  931984 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 13:09:21.591043  931984 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 13:09:21.605693  931984 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0224 13:09:21.605772  931984 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0224 13:09:21.628013  931984 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0224 13:09:21.643776  931984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 13:09:21.811004  931984 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0224 13:09:21.923410  931984 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0224 13:09:21.923497  931984 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0224 13:09:21.933121  931984 start.go:563] Will wait 60s for crictl version
	I0224 13:09:21.933200  931984 ssh_runner.go:195] Run: which crictl
	I0224 13:09:21.938671  931984 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0224 13:09:21.984652  931984 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0224 13:09:21.984742  931984 ssh_runner.go:195] Run: crio --version
	I0224 13:09:22.017494  931984 ssh_runner.go:195] Run: crio --version
	I0224 13:09:22.049393  931984 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0224 13:09:22.050704  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetIP
	I0224 13:09:22.054254  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:22.054769  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:44:47", ip: ""} in network mk-kubernetes-upgrade-973775: {Iface:virbr2 ExpiryTime:2025-02-24 14:09:09 +0000 UTC Type:0 Mac:52:54:00:d8:44:47 Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:kubernetes-upgrade-973775 Clientid:01:52:54:00:d8:44:47}
	I0224 13:09:22.054800  931984 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined IP address 192.168.50.35 and MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:09:22.055036  931984 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0224 13:09:22.060023  931984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 13:09:22.074802  931984 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-973775 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kube
rnetes-upgrade-973775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.35 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0224 13:09:22.074925  931984 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0224 13:09:22.074989  931984 ssh_runner.go:195] Run: sudo crictl images --output json
	I0224 13:09:22.114443  931984 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0224 13:09:22.114535  931984 ssh_runner.go:195] Run: which lz4
	I0224 13:09:22.120726  931984 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0224 13:09:22.127632  931984 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0224 13:09:22.127672  931984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0224 13:09:24.028375  931984 crio.go:462] duration metric: took 1.907700119s to copy over tarball
	I0224 13:09:24.028477  931984 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0224 13:09:26.920090  931984 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.891567213s)
	I0224 13:09:26.920142  931984 crio.go:469] duration metric: took 2.891726849s to extract the tarball
	I0224 13:09:26.920152  931984 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0224 13:09:26.964876  931984 ssh_runner.go:195] Run: sudo crictl images --output json
	I0224 13:09:27.030598  931984 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0224 13:09:27.030640  931984 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0224 13:09:27.030716  931984 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 13:09:27.030743  931984 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0224 13:09:27.030729  931984 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0224 13:09:27.030820  931984 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0224 13:09:27.030748  931984 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0224 13:09:27.030786  931984 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0224 13:09:27.030792  931984 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0224 13:09:27.030817  931984 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0224 13:09:27.032480  931984 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0224 13:09:27.032512  931984 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0224 13:09:27.032513  931984 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0224 13:09:27.032478  931984 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0224 13:09:27.032489  931984 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0224 13:09:27.032491  931984 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 13:09:27.032570  931984 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0224 13:09:27.032742  931984 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0224 13:09:27.218055  931984 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0224 13:09:27.223234  931984 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0224 13:09:27.236236  931984 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0224 13:09:27.265521  931984 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0224 13:09:27.269677  931984 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0224 13:09:27.271158  931984 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0224 13:09:27.293619  931984 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0224 13:09:27.302214  931984 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0224 13:09:27.302291  931984 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0224 13:09:27.302353  931984 ssh_runner.go:195] Run: which crictl
	I0224 13:09:27.341144  931984 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0224 13:09:27.341216  931984 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0224 13:09:27.341276  931984 ssh_runner.go:195] Run: which crictl
	I0224 13:09:27.438436  931984 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0224 13:09:27.438525  931984 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0224 13:09:27.438559  931984 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0224 13:09:27.438612  931984 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0224 13:09:27.438637  931984 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0224 13:09:27.438646  931984 ssh_runner.go:195] Run: which crictl
	I0224 13:09:27.438686  931984 ssh_runner.go:195] Run: which crictl
	I0224 13:09:27.438567  931984 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0224 13:09:27.438757  931984 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0224 13:09:27.438796  931984 ssh_runner.go:195] Run: which crictl
	I0224 13:09:27.438808  931984 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0224 13:09:27.438851  931984 ssh_runner.go:195] Run: which crictl
	I0224 13:09:27.445840  931984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0224 13:09:27.445875  931984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0224 13:09:27.446193  931984 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0224 13:09:27.446239  931984 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0224 13:09:27.446278  931984 ssh_runner.go:195] Run: which crictl
	I0224 13:09:27.451991  931984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0224 13:09:27.452053  931984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0224 13:09:27.452068  931984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0224 13:09:27.452107  931984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0224 13:09:27.589181  931984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0224 13:09:27.602523  931984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0224 13:09:27.602568  931984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0224 13:09:27.602623  931984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0224 13:09:27.603348  931984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0224 13:09:27.603423  931984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0224 13:09:27.604312  931984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0224 13:09:27.743321  931984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0224 13:09:27.743321  931984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0224 13:09:27.788723  931984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0224 13:09:27.788807  931984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0224 13:09:27.788733  931984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0224 13:09:27.788782  931984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0224 13:09:27.794044  931984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0224 13:09:27.847630  931984 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0224 13:09:27.855072  931984 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0224 13:09:27.940866  931984 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0224 13:09:27.940960  931984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0224 13:09:27.940993  931984 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0224 13:09:27.959041  931984 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0224 13:09:27.959054  931984 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0224 13:09:27.979955  931984 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0224 13:09:28.205512  931984 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 13:09:28.356343  931984 cache_images.go:92] duration metric: took 1.325680825s to LoadCachedImages
	W0224 13:09:28.356455  931984 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0224 13:09:28.356480  931984 kubeadm.go:934] updating node { 192.168.50.35 8443 v1.20.0 crio true true} ...
	I0224 13:09:28.356608  931984 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-973775 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.35
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-973775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0224 13:09:28.356698  931984 ssh_runner.go:195] Run: crio config
	I0224 13:09:28.413386  931984 cni.go:84] Creating CNI manager for ""
	I0224 13:09:28.413485  931984 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0224 13:09:28.413516  931984 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0224 13:09:28.413566  931984 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.35 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-973775 NodeName:kubernetes-upgrade-973775 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.35"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.35 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0224 13:09:28.413794  931984 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.35
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-973775"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.35
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.35"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0224 13:09:28.413895  931984 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0224 13:09:28.426307  931984 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 13:09:28.426423  931984 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0224 13:09:28.437955  931984 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0224 13:09:28.459215  931984 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0224 13:09:28.479474  931984 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0224 13:09:28.501063  931984 ssh_runner.go:195] Run: grep 192.168.50.35	control-plane.minikube.internal$ /etc/hosts
	I0224 13:09:28.505736  931984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.35	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 13:09:28.519359  931984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 13:09:28.671850  931984 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0224 13:09:28.691278  931984 certs.go:68] Setting up /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kubernetes-upgrade-973775 for IP: 192.168.50.35
	I0224 13:09:28.691307  931984 certs.go:194] generating shared ca certs ...
	I0224 13:09:28.691332  931984 certs.go:226] acquiring lock for ca certs: {Name:mk38777c6b180f63d1816020cff79a01106ddf33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:09:28.691551  931984 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20451-887294/.minikube/ca.key
	I0224 13:09:28.691615  931984 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20451-887294/.minikube/proxy-client-ca.key
	I0224 13:09:28.691636  931984 certs.go:256] generating profile certs ...
	I0224 13:09:28.691746  931984 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kubernetes-upgrade-973775/client.key
	I0224 13:09:28.691784  931984 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kubernetes-upgrade-973775/client.crt with IP's: []
	I0224 13:09:29.058121  931984 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kubernetes-upgrade-973775/client.crt ...
	I0224 13:09:29.058164  931984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kubernetes-upgrade-973775/client.crt: {Name:mkbafd3882bc1f0feadc0bd866839098c77f4004 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:09:29.058352  931984 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kubernetes-upgrade-973775/client.key ...
	I0224 13:09:29.058366  931984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kubernetes-upgrade-973775/client.key: {Name:mk569fb23cb0d3e5f29ff763255da71bbbccd9b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:09:29.058444  931984 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kubernetes-upgrade-973775/apiserver.key.0e721cfe
	I0224 13:09:29.058461  931984 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kubernetes-upgrade-973775/apiserver.crt.0e721cfe with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.35]
	I0224 13:09:29.383942  931984 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kubernetes-upgrade-973775/apiserver.crt.0e721cfe ...
	I0224 13:09:29.383978  931984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kubernetes-upgrade-973775/apiserver.crt.0e721cfe: {Name:mkfef668603730c0908f45b6a4ae07fdfa0f4d9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:09:29.384206  931984 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kubernetes-upgrade-973775/apiserver.key.0e721cfe ...
	I0224 13:09:29.384235  931984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kubernetes-upgrade-973775/apiserver.key.0e721cfe: {Name:mk2099eb4cc31d603b91f0bb1d5cdc3162f0e1b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:09:29.384366  931984 certs.go:381] copying /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kubernetes-upgrade-973775/apiserver.crt.0e721cfe -> /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kubernetes-upgrade-973775/apiserver.crt
	I0224 13:09:29.384470  931984 certs.go:385] copying /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kubernetes-upgrade-973775/apiserver.key.0e721cfe -> /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kubernetes-upgrade-973775/apiserver.key
	I0224 13:09:29.384530  931984 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kubernetes-upgrade-973775/proxy-client.key
	I0224 13:09:29.384548  931984 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kubernetes-upgrade-973775/proxy-client.crt with IP's: []
	I0224 13:09:29.566370  931984 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kubernetes-upgrade-973775/proxy-client.crt ...
	I0224 13:09:29.566439  931984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kubernetes-upgrade-973775/proxy-client.crt: {Name:mk8d952d584bbd79446641ba54526c8423c71abf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:09:29.566709  931984 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kubernetes-upgrade-973775/proxy-client.key ...
	I0224 13:09:29.566737  931984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kubernetes-upgrade-973775/proxy-client.key: {Name:mk50493673f9ab52e872a3fc5ffb7d351f68793b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:09:29.567012  931984 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/894564.pem (1338 bytes)
	W0224 13:09:29.567062  931984 certs.go:480] ignoring /home/jenkins/minikube-integration/20451-887294/.minikube/certs/894564_empty.pem, impossibly tiny 0 bytes
	I0224 13:09:29.567070  931984 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca-key.pem (1679 bytes)
	I0224 13:09:29.567096  931984 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem (1082 bytes)
	I0224 13:09:29.567126  931984 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem (1123 bytes)
	I0224 13:09:29.567155  931984 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/key.pem (1679 bytes)
	I0224 13:09:29.567202  931984 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/ssl/certs/8945642.pem (1708 bytes)
	I0224 13:09:29.567909  931984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0224 13:09:29.607052  931984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0224 13:09:29.648133  931984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0224 13:09:29.693654  931984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0224 13:09:29.722886  931984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kubernetes-upgrade-973775/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0224 13:09:29.756685  931984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kubernetes-upgrade-973775/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0224 13:09:29.790250  931984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kubernetes-upgrade-973775/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0224 13:09:29.820728  931984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kubernetes-upgrade-973775/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0224 13:09:29.854995  931984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0224 13:09:29.888145  931984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/certs/894564.pem --> /usr/share/ca-certificates/894564.pem (1338 bytes)
	I0224 13:09:29.924561  931984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/ssl/certs/8945642.pem --> /usr/share/ca-certificates/8945642.pem (1708 bytes)
	I0224 13:09:29.960417  931984 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0224 13:09:29.982286  931984 ssh_runner.go:195] Run: openssl version
	I0224 13:09:29.991336  931984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0224 13:09:30.009014  931984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0224 13:09:30.016299  931984 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 24 12:01 /usr/share/ca-certificates/minikubeCA.pem
	I0224 13:09:30.016378  931984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0224 13:09:30.024054  931984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0224 13:09:30.037614  931984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/894564.pem && ln -fs /usr/share/ca-certificates/894564.pem /etc/ssl/certs/894564.pem"
	I0224 13:09:30.051312  931984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/894564.pem
	I0224 13:09:30.056797  931984 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 24 12:09 /usr/share/ca-certificates/894564.pem
	I0224 13:09:30.056884  931984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/894564.pem
	I0224 13:09:30.063819  931984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/894564.pem /etc/ssl/certs/51391683.0"
	I0224 13:09:30.077265  931984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8945642.pem && ln -fs /usr/share/ca-certificates/8945642.pem /etc/ssl/certs/8945642.pem"
	I0224 13:09:30.089523  931984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8945642.pem
	I0224 13:09:30.094899  931984 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 24 12:09 /usr/share/ca-certificates/8945642.pem
	I0224 13:09:30.094964  931984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8945642.pem
	I0224 13:09:30.102016  931984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8945642.pem /etc/ssl/certs/3ec20f2e.0"
	I0224 13:09:30.118353  931984 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0224 13:09:30.124582  931984 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0224 13:09:30.124680  931984 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-973775 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kuberne
tes-upgrade-973775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.35 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 13:09:30.124775  931984 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0224 13:09:30.124845  931984 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0224 13:09:30.173742  931984 cri.go:89] found id: ""
	I0224 13:09:30.173832  931984 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0224 13:09:30.188644  931984 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0224 13:09:30.203078  931984 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 13:09:30.214888  931984 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 13:09:30.214923  931984 kubeadm.go:157] found existing configuration files:
	
	I0224 13:09:30.214986  931984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0224 13:09:30.227262  931984 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0224 13:09:30.227343  931984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0224 13:09:30.239854  931984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0224 13:09:30.251065  931984 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0224 13:09:30.251131  931984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0224 13:09:30.262114  931984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0224 13:09:30.275952  931984 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0224 13:09:30.276018  931984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0224 13:09:30.288306  931984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0224 13:09:30.302530  931984 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0224 13:09:30.302629  931984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0224 13:09:30.314400  931984 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0224 13:09:30.498792  931984 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0224 13:09:30.498890  931984 kubeadm.go:310] [preflight] Running pre-flight checks
	I0224 13:09:30.691710  931984 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0224 13:09:30.691886  931984 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0224 13:09:30.692023  931984 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0224 13:09:30.941018  931984 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0224 13:09:30.943378  931984 out.go:235]   - Generating certificates and keys ...
	I0224 13:09:30.943506  931984 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0224 13:09:30.943607  931984 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0224 13:09:31.262614  931984 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0224 13:09:31.425474  931984 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0224 13:09:31.574305  931984 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0224 13:09:31.984711  931984 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0224 13:09:32.123561  931984 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0224 13:09:32.123739  931984 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-973775 localhost] and IPs [192.168.50.35 127.0.0.1 ::1]
	I0224 13:09:32.247013  931984 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0224 13:09:32.247802  931984 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-973775 localhost] and IPs [192.168.50.35 127.0.0.1 ::1]
	I0224 13:09:32.378356  931984 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0224 13:09:32.601242  931984 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0224 13:09:32.950393  931984 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0224 13:09:32.950497  931984 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0224 13:09:33.230062  931984 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0224 13:09:33.399110  931984 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0224 13:09:33.479777  931984 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0224 13:09:33.634388  931984 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0224 13:09:33.661803  931984 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 13:09:33.662875  931984 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 13:09:33.662951  931984 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0224 13:09:33.845221  931984 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0224 13:09:33.847445  931984 out.go:235]   - Booting up control plane ...
	I0224 13:09:33.847609  931984 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0224 13:09:33.864051  931984 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0224 13:09:33.865356  931984 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0224 13:09:33.866348  931984 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0224 13:09:33.872006  931984 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0224 13:10:13.824721  931984 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0224 13:10:13.825938  931984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:10:13.826176  931984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:10:18.825618  931984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:10:18.825826  931984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:10:28.825012  931984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:10:28.825264  931984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:10:48.825834  931984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:10:48.826048  931984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:11:28.827180  931984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:11:28.827387  931984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:11:28.827427  931984 kubeadm.go:310] 
	I0224 13:11:28.827498  931984 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0224 13:11:28.827565  931984 kubeadm.go:310] 		timed out waiting for the condition
	I0224 13:11:28.827576  931984 kubeadm.go:310] 
	I0224 13:11:28.827630  931984 kubeadm.go:310] 	This error is likely caused by:
	I0224 13:11:28.827690  931984 kubeadm.go:310] 		- The kubelet is not running
	I0224 13:11:28.827840  931984 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0224 13:11:28.827863  931984 kubeadm.go:310] 
	I0224 13:11:28.827989  931984 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0224 13:11:28.828039  931984 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0224 13:11:28.828088  931984 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0224 13:11:28.828098  931984 kubeadm.go:310] 
	I0224 13:11:28.828226  931984 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0224 13:11:28.828356  931984 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0224 13:11:28.828368  931984 kubeadm.go:310] 
	I0224 13:11:28.828510  931984 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0224 13:11:28.828638  931984 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0224 13:11:28.828736  931984 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0224 13:11:28.828833  931984 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0224 13:11:28.828846  931984 kubeadm.go:310] 
	I0224 13:11:28.829089  931984 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 13:11:28.829176  931984 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0224 13:11:28.829279  931984 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0224 13:11:28.829449  931984 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-973775 localhost] and IPs [192.168.50.35 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-973775 localhost] and IPs [192.168.50.35 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-973775 localhost] and IPs [192.168.50.35 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-973775 localhost] and IPs [192.168.50.35 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0224 13:11:28.829503  931984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0224 13:11:30.213192  931984 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.383649682s)
	I0224 13:11:30.213301  931984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 13:11:30.228463  931984 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 13:11:30.239354  931984 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 13:11:30.239379  931984 kubeadm.go:157] found existing configuration files:
	
	I0224 13:11:30.239432  931984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0224 13:11:30.249417  931984 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0224 13:11:30.249500  931984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0224 13:11:30.260763  931984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0224 13:11:30.270898  931984 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0224 13:11:30.270960  931984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0224 13:11:30.281501  931984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0224 13:11:30.291285  931984 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0224 13:11:30.291358  931984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0224 13:11:30.301991  931984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0224 13:11:30.312699  931984 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0224 13:11:30.312766  931984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0224 13:11:30.327575  931984 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0224 13:11:30.415382  931984 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0224 13:11:30.415497  931984 kubeadm.go:310] [preflight] Running pre-flight checks
	I0224 13:11:30.581940  931984 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0224 13:11:30.582075  931984 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0224 13:11:30.582194  931984 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0224 13:11:30.781269  931984 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0224 13:11:30.783173  931984 out.go:235]   - Generating certificates and keys ...
	I0224 13:11:30.783275  931984 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0224 13:11:30.783374  931984 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0224 13:11:30.783499  931984 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0224 13:11:30.783615  931984 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0224 13:11:30.783747  931984 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0224 13:11:30.783853  931984 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0224 13:11:30.783951  931984 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0224 13:11:30.784046  931984 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0224 13:11:30.784156  931984 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0224 13:11:30.784272  931984 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0224 13:11:30.784328  931984 kubeadm.go:310] [certs] Using the existing "sa" key
	I0224 13:11:30.784427  931984 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0224 13:11:31.102935  931984 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0224 13:11:31.176813  931984 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0224 13:11:31.299311  931984 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0224 13:11:31.421663  931984 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0224 13:11:31.438437  931984 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 13:11:31.440261  931984 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 13:11:31.440366  931984 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0224 13:11:31.598512  931984 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0224 13:11:31.600540  931984 out.go:235]   - Booting up control plane ...
	I0224 13:11:31.600677  931984 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0224 13:11:31.607409  931984 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0224 13:11:31.611541  931984 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0224 13:11:31.612353  931984 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0224 13:11:31.615457  931984 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0224 13:12:11.616376  931984 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0224 13:12:11.616552  931984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:12:11.616851  931984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:12:16.617314  931984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:12:16.617651  931984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:12:26.618085  931984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:12:26.618378  931984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:12:46.619368  931984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:12:46.619617  931984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:13:26.621470  931984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:13:26.621780  931984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:13:26.621821  931984 kubeadm.go:310] 
	I0224 13:13:26.621885  931984 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0224 13:13:26.621944  931984 kubeadm.go:310] 		timed out waiting for the condition
	I0224 13:13:26.621965  931984 kubeadm.go:310] 
	I0224 13:13:26.622011  931984 kubeadm.go:310] 	This error is likely caused by:
	I0224 13:13:26.622056  931984 kubeadm.go:310] 		- The kubelet is not running
	I0224 13:13:26.622190  931984 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0224 13:13:26.622203  931984 kubeadm.go:310] 
	I0224 13:13:26.622332  931984 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0224 13:13:26.622379  931984 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0224 13:13:26.622423  931984 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0224 13:13:26.622432  931984 kubeadm.go:310] 
	I0224 13:13:26.622570  931984 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0224 13:13:26.622676  931984 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0224 13:13:26.622688  931984 kubeadm.go:310] 
	I0224 13:13:26.622829  931984 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0224 13:13:26.622942  931984 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0224 13:13:26.623039  931984 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0224 13:13:26.623131  931984 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0224 13:13:26.623142  931984 kubeadm.go:310] 
	I0224 13:13:26.624097  931984 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 13:13:26.624225  931984 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0224 13:13:26.624419  931984 kubeadm.go:394] duration metric: took 3m56.499743467s to StartCluster
	I0224 13:13:26.624490  931984 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:13:26.624559  931984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:13:26.624626  931984 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0224 13:13:26.678692  931984 cri.go:89] found id: ""
	I0224 13:13:26.678723  931984 logs.go:282] 0 containers: []
	W0224 13:13:26.678734  931984 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:13:26.678741  931984 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:13:26.678814  931984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:13:26.717514  931984 cri.go:89] found id: ""
	I0224 13:13:26.717545  931984 logs.go:282] 0 containers: []
	W0224 13:13:26.717557  931984 logs.go:284] No container was found matching "etcd"
	I0224 13:13:26.717571  931984 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:13:26.717639  931984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:13:26.757661  931984 cri.go:89] found id: ""
	I0224 13:13:26.757692  931984 logs.go:282] 0 containers: []
	W0224 13:13:26.757702  931984 logs.go:284] No container was found matching "coredns"
	I0224 13:13:26.757711  931984 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:13:26.757771  931984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:13:26.803809  931984 cri.go:89] found id: ""
	I0224 13:13:26.803842  931984 logs.go:282] 0 containers: []
	W0224 13:13:26.803853  931984 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:13:26.803862  931984 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:13:26.803916  931984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:13:26.858471  931984 cri.go:89] found id: ""
	I0224 13:13:26.858502  931984 logs.go:282] 0 containers: []
	W0224 13:13:26.858511  931984 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:13:26.858517  931984 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:13:26.858569  931984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:13:26.899301  931984 cri.go:89] found id: ""
	I0224 13:13:26.899332  931984 logs.go:282] 0 containers: []
	W0224 13:13:26.899344  931984 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:13:26.899353  931984 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:13:26.899411  931984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:13:26.942745  931984 cri.go:89] found id: ""
	I0224 13:13:26.942779  931984 logs.go:282] 0 containers: []
	W0224 13:13:26.942791  931984 logs.go:284] No container was found matching "kindnet"
	I0224 13:13:26.942812  931984 logs.go:123] Gathering logs for dmesg ...
	I0224 13:13:26.942830  931984 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:13:26.960696  931984 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:13:26.960728  931984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:13:27.123407  931984 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:13:27.123431  931984 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:13:27.123448  931984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:13:27.253971  931984 logs.go:123] Gathering logs for container status ...
	I0224 13:13:27.254023  931984 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:13:27.304093  931984 logs.go:123] Gathering logs for kubelet ...
	I0224 13:13:27.304126  931984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0224 13:13:27.363240  931984 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0224 13:13:27.363320  931984 out.go:270] * 
	* 
	W0224 13:13:27.363394  931984 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0224 13:13:27.363413  931984 out.go:270] * 
	* 
	W0224 13:13:27.364809  931984 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0224 13:13:27.560245  931984 out.go:201] 
	W0224 13:13:27.666361  931984 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0224 13:13:27.666427  931984 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0224 13:13:27.666457  931984 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0224 13:13:27.796473  931984 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-973775 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-973775
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-973775: (1.91067052s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-973775 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-973775 status --format={{.Host}}: exit status 7 (108.70594ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-973775 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-973775 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m15.233629732s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-973775 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-973775 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-973775 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (121.611206ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-973775] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20451
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20451-887294/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-887294/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-973775
	    minikube start -p kubernetes-upgrade-973775 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9737752 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-973775 --kubernetes-version=v1.32.2
	    

                                                
                                                
** /stderr **
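The K8S_DOWNGRADE_UNSUPPORTED exit is the behaviour this step of the test expects; the useful part of the output is the recovery paths minikube prints. As a sketch, options 1 and 3 from the suggestion above boil down to the following commands (same profile name and versions as in the output; the two options are mutually exclusive):

    # Option 1: recreate the cluster at the older Kubernetes version
    minikube delete -p kubernetes-upgrade-973775
    minikube start -p kubernetes-upgrade-973775 --kubernetes-version=v1.20.0

    # Option 3: keep the existing cluster and stay on v1.32.2
    minikube start -p kubernetes-upgrade-973775 --kubernetes-version=v1.32.2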
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-973775 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-973775 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (42.304271664s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-02-24 13:15:27.886033709 +0000 UTC m=+4525.354448860
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-973775 -n kubernetes-upgrade-973775
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-973775 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-973775 logs -n 25: (2.120757484s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-799329 sudo cat                              | auto-799329               | jenkins | v1.35.0 | 24 Feb 25 13:14 UTC | 24 Feb 25 13:14 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p auto-799329 sudo cat                              | auto-799329               | jenkins | v1.35.0 | 24 Feb 25 13:14 UTC | 24 Feb 25 13:14 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p auto-799329 sudo systemctl                        | auto-799329               | jenkins | v1.35.0 | 24 Feb 25 13:14 UTC |                     |
	|         | status docker --all --full                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-799329 sudo systemctl                        | auto-799329               | jenkins | v1.35.0 | 24 Feb 25 13:14 UTC | 24 Feb 25 13:14 UTC |
	|         | cat docker --no-pager                                |                           |         |         |                     |                     |
	| ssh     | -p auto-799329 sudo cat                              | auto-799329               | jenkins | v1.35.0 | 24 Feb 25 13:14 UTC | 24 Feb 25 13:14 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p auto-799329 sudo docker                           | auto-799329               | jenkins | v1.35.0 | 24 Feb 25 13:14 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p auto-799329 sudo systemctl                        | auto-799329               | jenkins | v1.35.0 | 24 Feb 25 13:14 UTC |                     |
	|         | status cri-docker --all --full                       |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-799329 sudo systemctl                        | auto-799329               | jenkins | v1.35.0 | 24 Feb 25 13:14 UTC | 24 Feb 25 13:14 UTC |
	|         | cat cri-docker --no-pager                            |                           |         |         |                     |                     |
	| ssh     | -p auto-799329 sudo cat                              | auto-799329               | jenkins | v1.35.0 | 24 Feb 25 13:14 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p auto-799329 sudo cat                              | auto-799329               | jenkins | v1.35.0 | 24 Feb 25 13:14 UTC | 24 Feb 25 13:14 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p auto-799329 sudo                                  | auto-799329               | jenkins | v1.35.0 | 24 Feb 25 13:14 UTC | 24 Feb 25 13:14 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p auto-799329 sudo systemctl                        | auto-799329               | jenkins | v1.35.0 | 24 Feb 25 13:14 UTC |                     |
	|         | status containerd --all --full                       |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-799329 sudo systemctl                        | auto-799329               | jenkins | v1.35.0 | 24 Feb 25 13:14 UTC | 24 Feb 25 13:14 UTC |
	|         | cat containerd --no-pager                            |                           |         |         |                     |                     |
	| ssh     | -p auto-799329 sudo cat                              | auto-799329               | jenkins | v1.35.0 | 24 Feb 25 13:14 UTC | 24 Feb 25 13:14 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p auto-799329 sudo cat                              | auto-799329               | jenkins | v1.35.0 | 24 Feb 25 13:14 UTC | 24 Feb 25 13:14 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p auto-799329 sudo containerd                       | auto-799329               | jenkins | v1.35.0 | 24 Feb 25 13:14 UTC | 24 Feb 25 13:14 UTC |
	|         | config dump                                          |                           |         |         |                     |                     |
	| ssh     | -p auto-799329 sudo systemctl                        | auto-799329               | jenkins | v1.35.0 | 24 Feb 25 13:14 UTC | 24 Feb 25 13:14 UTC |
	|         | status crio --all --full                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-799329 sudo systemctl                        | auto-799329               | jenkins | v1.35.0 | 24 Feb 25 13:14 UTC | 24 Feb 25 13:14 UTC |
	|         | cat crio --no-pager                                  |                           |         |         |                     |                     |
	| ssh     | -p auto-799329 sudo find                             | auto-799329               | jenkins | v1.35.0 | 24 Feb 25 13:14 UTC | 24 Feb 25 13:14 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p auto-799329 sudo crio                             | auto-799329               | jenkins | v1.35.0 | 24 Feb 25 13:14 UTC | 24 Feb 25 13:14 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p auto-799329                                       | auto-799329               | jenkins | v1.35.0 | 24 Feb 25 13:14 UTC | 24 Feb 25 13:14 UTC |
	| start   | -p calico-799329 --memory=3072                       | calico-799329             | jenkins | v1.35.0 | 24 Feb 25 13:14 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                           |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-973775                         | kubernetes-upgrade-973775 | jenkins | v1.35.0 | 24 Feb 25 13:14 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-973775                         | kubernetes-upgrade-973775 | jenkins | v1.35.0 | 24 Feb 25 13:14 UTC | 24 Feb 25 13:15 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p cert-expiration-993480                            | cert-expiration-993480    | jenkins | v1.35.0 | 24 Feb 25 13:15 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                              |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/24 13:15:22
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 13:15:22.821127  939272 out.go:345] Setting OutFile to fd 1 ...
	I0224 13:15:22.821445  939272 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 13:15:22.821451  939272 out.go:358] Setting ErrFile to fd 2...
	I0224 13:15:22.821457  939272 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 13:15:22.821788  939272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-887294/.minikube/bin
	I0224 13:15:22.822551  939272 out.go:352] Setting JSON to false
	I0224 13:15:22.823919  939272 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10664,"bootTime":1740392259,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 13:15:22.823997  939272 start.go:139] virtualization: kvm guest
	I0224 13:15:22.826491  939272 out.go:177] * [cert-expiration-993480] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0224 13:15:22.828545  939272 out.go:177]   - MINIKUBE_LOCATION=20451
	I0224 13:15:22.828555  939272 notify.go:220] Checking for updates...
	I0224 13:15:22.831700  939272 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 13:15:22.833282  939272 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20451-887294/kubeconfig
	I0224 13:15:22.835474  939272 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-887294/.minikube
	I0224 13:15:22.837485  939272 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0224 13:15:22.839131  939272 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 13:15:22.841544  939272 config.go:182] Loaded profile config "cert-expiration-993480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 13:15:22.842059  939272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:15:22.842118  939272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:15:22.861034  939272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41897
	I0224 13:15:22.861786  939272 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:15:22.862526  939272 main.go:141] libmachine: Using API Version  1
	I0224 13:15:22.862542  939272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:15:22.863018  939272 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:15:22.863263  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .DriverName
	I0224 13:15:22.863546  939272 driver.go:394] Setting default libvirt URI to qemu:///system
	I0224 13:15:22.863991  939272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:15:22.864034  939272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:15:22.879634  939272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38497
	I0224 13:15:22.880165  939272 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:15:22.880790  939272 main.go:141] libmachine: Using API Version  1
	I0224 13:15:22.880812  939272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:15:22.881387  939272 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:15:22.881678  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .DriverName
	I0224 13:15:22.928962  939272 out.go:177] * Using the kvm2 driver based on existing profile
	I0224 13:15:22.930328  939272 start.go:297] selected driver: kvm2
	I0224 13:15:22.930338  939272 start.go:901] validating driver "kvm2" against &{Name:cert-expiration-993480 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 Clus
terName:cert-expiration-993480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.171 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bina
ryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 13:15:22.930535  939272 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 13:15:22.931622  939272 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 13:15:22.931709  939272 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20451-887294/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0224 13:15:22.950139  939272 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0224 13:15:22.950546  939272 cni.go:84] Creating CNI manager for ""
	I0224 13:15:22.950588  939272 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0224 13:15:22.950644  939272 start.go:340] cluster config:
	{Name:cert-expiration-993480 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:cert-expiration-993480 Namespace:default APIServerHAVIP: APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.171 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 13:15:22.950738  939272 iso.go:125] acquiring lock: {Name:mk57408cca66a96a13d93cda43cdfac6e61aef3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 13:15:22.953386  939272 out.go:177] * Starting "cert-expiration-993480" primary control-plane node in "cert-expiration-993480" cluster
	I0224 13:15:22.954703  939272 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0224 13:15:22.954776  939272 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0224 13:15:22.954785  939272 cache.go:56] Caching tarball of preloaded images
	I0224 13:15:22.955003  939272 preload.go:172] Found /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0224 13:15:22.955017  939272 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0224 13:15:22.955187  939272 profile.go:143] Saving config to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/cert-expiration-993480/config.json ...
	I0224 13:15:22.955494  939272 start.go:360] acquireMachinesLock for cert-expiration-993480: {Name:mk023761b01bb629a1acd40bc8104cc517b0e15b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0224 13:15:22.955562  939272 start.go:364] duration metric: took 44.512µs to acquireMachinesLock for "cert-expiration-993480"
	I0224 13:15:22.955600  939272 start.go:96] Skipping create...Using existing machine configuration
	I0224 13:15:22.955605  939272 fix.go:54] fixHost starting: 
	I0224 13:15:22.956040  939272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:15:22.956086  939272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:15:22.974067  939272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43461
	I0224 13:15:22.974573  939272 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:15:22.975146  939272 main.go:141] libmachine: Using API Version  1
	I0224 13:15:22.975159  939272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:15:22.975497  939272 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:15:22.975742  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .DriverName
	I0224 13:15:22.976231  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .GetState
	I0224 13:15:22.978203  939272 fix.go:112] recreateIfNeeded on cert-expiration-993480: state=Running err=<nil>
	W0224 13:15:22.978223  939272 fix.go:138] unexpected machine state, will restart: <nil>
	I0224 13:15:22.980252  939272 out.go:177] * Updating the running kvm2 "cert-expiration-993480" VM ...
	I0224 13:15:21.921380  938679 out.go:235]   - Configuring RBAC rules ...
	I0224 13:15:21.921568  938679 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0224 13:15:21.935355  938679 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0224 13:15:21.967342  938679 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0224 13:15:21.978286  938679 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0224 13:15:21.984608  938679 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0224 13:15:21.989705  938679 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0224 13:15:22.162105  938679 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0224 13:15:22.632712  938679 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0224 13:15:23.163042  938679 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0224 13:15:23.165388  938679 kubeadm.go:310] 
	I0224 13:15:23.165518  938679 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0224 13:15:23.165532  938679 kubeadm.go:310] 
	I0224 13:15:23.165630  938679 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0224 13:15:23.165641  938679 kubeadm.go:310] 
	I0224 13:15:23.165676  938679 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0224 13:15:23.165762  938679 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0224 13:15:23.165834  938679 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0224 13:15:23.165844  938679 kubeadm.go:310] 
	I0224 13:15:23.165916  938679 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0224 13:15:23.165927  938679 kubeadm.go:310] 
	I0224 13:15:23.165986  938679 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0224 13:15:23.165996  938679 kubeadm.go:310] 
	I0224 13:15:23.166064  938679 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0224 13:15:23.166138  938679 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0224 13:15:23.166196  938679 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0224 13:15:23.166203  938679 kubeadm.go:310] 
	I0224 13:15:23.166272  938679 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0224 13:15:23.166336  938679 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0224 13:15:23.166344  938679 kubeadm.go:310] 
	I0224 13:15:23.166442  938679 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g1t1w8.bd9rtwyazkg68qx6 \
	I0224 13:15:23.166574  938679 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:25cdff1b144f9bdda2a397f8df58979800593c9a9a7e9fabc93239253c272d6f \
	I0224 13:15:23.166614  938679 kubeadm.go:310] 	--control-plane 
	I0224 13:15:23.166620  938679 kubeadm.go:310] 
	I0224 13:15:23.166715  938679 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0224 13:15:23.166723  938679 kubeadm.go:310] 
	I0224 13:15:23.166837  938679 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g1t1w8.bd9rtwyazkg68qx6 \
	I0224 13:15:23.166978  938679 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:25cdff1b144f9bdda2a397f8df58979800593c9a9a7e9fabc93239253c272d6f 
	I0224 13:15:23.167139  938679 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 13:15:23.167168  938679 cni.go:84] Creating CNI manager for "calico"
	I0224 13:15:23.169157  938679 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0224 13:15:19.912875  937419 node_ready.go:49] node "kindnet-799329" has status "Ready":"True"
	I0224 13:15:19.912927  937419 node_ready.go:38] duration metric: took 15.008986822s for node "kindnet-799329" to be "Ready" ...
	I0224 13:15:19.912942  937419 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 13:15:19.920998  937419 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-mj2fd" in "kube-system" namespace to be "Ready" ...
	I0224 13:15:21.928814  937419 pod_ready.go:103] pod "coredns-668d6bf9bc-mj2fd" in "kube-system" namespace has status "Ready":"False"
	I0224 13:15:22.428585  937419 pod_ready.go:93] pod "coredns-668d6bf9bc-mj2fd" in "kube-system" namespace has status "Ready":"True"
	I0224 13:15:22.428615  937419 pod_ready.go:82] duration metric: took 2.507583832s for pod "coredns-668d6bf9bc-mj2fd" in "kube-system" namespace to be "Ready" ...
	I0224 13:15:22.428629  937419 pod_ready.go:79] waiting up to 15m0s for pod "etcd-kindnet-799329" in "kube-system" namespace to be "Ready" ...
	I0224 13:15:22.438805  937419 pod_ready.go:93] pod "etcd-kindnet-799329" in "kube-system" namespace has status "Ready":"True"
	I0224 13:15:22.438831  937419 pod_ready.go:82] duration metric: took 10.193864ms for pod "etcd-kindnet-799329" in "kube-system" namespace to be "Ready" ...
	I0224 13:15:22.438848  937419 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-kindnet-799329" in "kube-system" namespace to be "Ready" ...
	I0224 13:15:22.447787  937419 pod_ready.go:93] pod "kube-apiserver-kindnet-799329" in "kube-system" namespace has status "Ready":"True"
	I0224 13:15:22.447831  937419 pod_ready.go:82] duration metric: took 8.963271ms for pod "kube-apiserver-kindnet-799329" in "kube-system" namespace to be "Ready" ...
	I0224 13:15:22.447847  937419 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-kindnet-799329" in "kube-system" namespace to be "Ready" ...
	I0224 13:15:22.455729  937419 pod_ready.go:93] pod "kube-controller-manager-kindnet-799329" in "kube-system" namespace has status "Ready":"True"
	I0224 13:15:22.455758  937419 pod_ready.go:82] duration metric: took 7.901269ms for pod "kube-controller-manager-kindnet-799329" in "kube-system" namespace to be "Ready" ...
	I0224 13:15:22.455775  937419 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-kg858" in "kube-system" namespace to be "Ready" ...
	I0224 13:15:22.466273  937419 pod_ready.go:93] pod "kube-proxy-kg858" in "kube-system" namespace has status "Ready":"True"
	I0224 13:15:22.466306  937419 pod_ready.go:82] duration metric: took 10.5234ms for pod "kube-proxy-kg858" in "kube-system" namespace to be "Ready" ...
	I0224 13:15:22.466320  937419 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-kindnet-799329" in "kube-system" namespace to be "Ready" ...
	I0224 13:15:22.824801  937419 pod_ready.go:93] pod "kube-scheduler-kindnet-799329" in "kube-system" namespace has status "Ready":"True"
	I0224 13:15:22.824830  937419 pod_ready.go:82] duration metric: took 358.501605ms for pod "kube-scheduler-kindnet-799329" in "kube-system" namespace to be "Ready" ...
	I0224 13:15:22.824847  937419 pod_ready.go:39] duration metric: took 2.911886576s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 13:15:22.824869  937419 api_server.go:52] waiting for apiserver process to appear ...
	I0224 13:15:22.824930  937419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:15:22.847739  937419 api_server.go:72] duration metric: took 18.892623704s to wait for apiserver process to appear ...
	I0224 13:15:22.847767  937419 api_server.go:88] waiting for apiserver healthz status ...
	I0224 13:15:22.847788  937419 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0224 13:15:22.859160  937419 api_server.go:279] https://192.168.39.144:8443/healthz returned 200:
	ok
	I0224 13:15:22.860622  937419 api_server.go:141] control plane version: v1.32.2
	I0224 13:15:22.860655  937419 api_server.go:131] duration metric: took 12.879028ms to wait for apiserver health ...
	I0224 13:15:22.860666  937419 system_pods.go:43] waiting for kube-system pods to appear ...
	I0224 13:15:23.029061  937419 system_pods.go:59] 8 kube-system pods found
	I0224 13:15:23.029107  937419 system_pods.go:61] "coredns-668d6bf9bc-mj2fd" [b6c75b77-fc22-452e-bf67-69941dd454a1] Running
	I0224 13:15:23.029115  937419 system_pods.go:61] "etcd-kindnet-799329" [2de0f506-f2a1-43a4-a921-5d525eb8a9fc] Running
	I0224 13:15:23.029121  937419 system_pods.go:61] "kindnet-9tvpr" [8ec0704c-c773-4b56-ab1e-bca6dae97f46] Running
	I0224 13:15:23.029126  937419 system_pods.go:61] "kube-apiserver-kindnet-799329" [4ca46b78-557b-4630-a77b-6cf298d9a05b] Running
	I0224 13:15:23.029131  937419 system_pods.go:61] "kube-controller-manager-kindnet-799329" [7d08aeb7-e967-4fd7-adb6-04a3d895a2b9] Running
	I0224 13:15:23.029136  937419 system_pods.go:61] "kube-proxy-kg858" [3897fe6a-4b86-4c7d-ad40-dc475cebaf44] Running
	I0224 13:15:23.029141  937419 system_pods.go:61] "kube-scheduler-kindnet-799329" [f89f22d3-9e18-40ef-b406-9b73d5070e65] Running
	I0224 13:15:23.029146  937419 system_pods.go:61] "storage-provisioner" [d228fa4e-8a7b-4a5e-a1df-18ed500fa384] Running
	I0224 13:15:23.029154  937419 system_pods.go:74] duration metric: took 168.480206ms to wait for pod list to return data ...
	I0224 13:15:23.029164  937419 default_sa.go:34] waiting for default service account to be created ...
	I0224 13:15:23.226490  937419 default_sa.go:45] found service account: "default"
	I0224 13:15:23.226525  937419 default_sa.go:55] duration metric: took 197.352513ms for default service account to be created ...
	I0224 13:15:23.226546  937419 system_pods.go:116] waiting for k8s-apps to be running ...
	I0224 13:15:23.426427  937419 system_pods.go:86] 8 kube-system pods found
	I0224 13:15:23.426478  937419 system_pods.go:89] "coredns-668d6bf9bc-mj2fd" [b6c75b77-fc22-452e-bf67-69941dd454a1] Running
	I0224 13:15:23.426488  937419 system_pods.go:89] "etcd-kindnet-799329" [2de0f506-f2a1-43a4-a921-5d525eb8a9fc] Running
	I0224 13:15:23.426494  937419 system_pods.go:89] "kindnet-9tvpr" [8ec0704c-c773-4b56-ab1e-bca6dae97f46] Running
	I0224 13:15:23.426502  937419 system_pods.go:89] "kube-apiserver-kindnet-799329" [4ca46b78-557b-4630-a77b-6cf298d9a05b] Running
	I0224 13:15:23.426508  937419 system_pods.go:89] "kube-controller-manager-kindnet-799329" [7d08aeb7-e967-4fd7-adb6-04a3d895a2b9] Running
	I0224 13:15:23.426513  937419 system_pods.go:89] "kube-proxy-kg858" [3897fe6a-4b86-4c7d-ad40-dc475cebaf44] Running
	I0224 13:15:23.426521  937419 system_pods.go:89] "kube-scheduler-kindnet-799329" [f89f22d3-9e18-40ef-b406-9b73d5070e65] Running
	I0224 13:15:23.426538  937419 system_pods.go:89] "storage-provisioner" [d228fa4e-8a7b-4a5e-a1df-18ed500fa384] Running
	I0224 13:15:23.426549  937419 system_pods.go:126] duration metric: took 199.995017ms to wait for k8s-apps to be running ...
	I0224 13:15:23.426559  937419 system_svc.go:44] waiting for kubelet service to be running ....
	I0224 13:15:23.426622  937419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 13:15:23.447411  937419 system_svc.go:56] duration metric: took 20.828447ms WaitForService to wait for kubelet
	I0224 13:15:23.447446  937419 kubeadm.go:582] duration metric: took 19.492337135s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0224 13:15:23.447472  937419 node_conditions.go:102] verifying NodePressure condition ...
	I0224 13:15:23.625542  937419 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0224 13:15:23.625584  937419 node_conditions.go:123] node cpu capacity is 2
	I0224 13:15:23.625603  937419 node_conditions.go:105] duration metric: took 178.125184ms to run NodePressure ...
	I0224 13:15:23.625618  937419 start.go:241] waiting for startup goroutines ...
	I0224 13:15:23.625627  937419 start.go:246] waiting for cluster config update ...
	I0224 13:15:23.625642  937419 start.go:255] writing updated cluster config ...
	I0224 13:15:23.626074  937419 ssh_runner.go:195] Run: rm -f paused
	I0224 13:15:23.699185  937419 start.go:600] kubectl: 1.32.2, cluster: 1.32.2 (minor skew: 0)
	I0224 13:15:23.701385  937419 out.go:177] * Done! kubectl is now configured to use "kindnet-799329" cluster and "default" namespace by default
	I0224 13:15:23.171166  938679 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0224 13:15:23.171193  938679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (324369 bytes)
	I0224 13:15:23.203063  938679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0224 13:15:25.148849  938679 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.945738644s)
	I0224 13:15:25.148915  938679 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0224 13:15:25.149051  938679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 13:15:25.149056  938679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-799329 minikube.k8s.io/updated_at=2025_02_24T13_15_25_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=b76650f53499dbb51707efa4a87e94b72d747650 minikube.k8s.io/name=calico-799329 minikube.k8s.io/primary=true
	I0224 13:15:21.055672  938996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:15:21.555533  938996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:15:21.574072  938996 api_server.go:72] duration metric: took 1.019440824s to wait for apiserver process to appear ...
	I0224 13:15:21.574108  938996 api_server.go:88] waiting for apiserver healthz status ...
	I0224 13:15:21.574134  938996 api_server.go:253] Checking apiserver healthz at https://192.168.50.35:8443/healthz ...
	I0224 13:15:24.279559  938996 api_server.go:279] https://192.168.50.35:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0224 13:15:24.279629  938996 api_server.go:103] status: https://192.168.50.35:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0224 13:15:24.279651  938996 api_server.go:253] Checking apiserver healthz at https://192.168.50.35:8443/healthz ...
	I0224 13:15:24.305229  938996 api_server.go:279] https://192.168.50.35:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0224 13:15:24.305264  938996 api_server.go:103] status: https://192.168.50.35:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0224 13:15:24.574615  938996 api_server.go:253] Checking apiserver healthz at https://192.168.50.35:8443/healthz ...
	I0224 13:15:24.581970  938996 api_server.go:279] https://192.168.50.35:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0224 13:15:24.582008  938996 api_server.go:103] status: https://192.168.50.35:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 13:15:25.074300  938996 api_server.go:253] Checking apiserver healthz at https://192.168.50.35:8443/healthz ...
	I0224 13:15:25.081089  938996 api_server.go:279] https://192.168.50.35:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0224 13:15:25.081127  938996 api_server.go:103] status: https://192.168.50.35:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 13:15:25.574316  938996 api_server.go:253] Checking apiserver healthz at https://192.168.50.35:8443/healthz ...
	I0224 13:15:25.590642  938996 api_server.go:279] https://192.168.50.35:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0224 13:15:25.590686  938996 api_server.go:103] status: https://192.168.50.35:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 13:15:26.074479  938996 api_server.go:253] Checking apiserver healthz at https://192.168.50.35:8443/healthz ...
	I0224 13:15:26.079520  938996 api_server.go:279] https://192.168.50.35:8443/healthz returned 200:
	ok
	I0224 13:15:26.088830  938996 api_server.go:141] control plane version: v1.32.2
	I0224 13:15:26.088866  938996 api_server.go:131] duration metric: took 4.514751018s to wait for apiserver health ...
	I0224 13:15:26.088876  938996 cni.go:84] Creating CNI manager for ""
	I0224 13:15:26.088883  938996 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0224 13:15:26.090553  938996 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0224 13:15:26.092051  938996 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0224 13:15:26.104617  938996 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0224 13:15:26.128988  938996 system_pods.go:43] waiting for kube-system pods to appear ...
	I0224 13:15:26.135421  938996 system_pods.go:59] 8 kube-system pods found
	I0224 13:15:26.135467  938996 system_pods.go:61] "coredns-668d6bf9bc-bt28c" [dbac42b2-2394-451f-b54f-1d9ec44ac4e1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0224 13:15:26.135477  938996 system_pods.go:61] "coredns-668d6bf9bc-kxnpn" [e2f974f0-ba78-429c-98a4-64e4c5314321] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0224 13:15:26.135485  938996 system_pods.go:61] "etcd-kubernetes-upgrade-973775" [69025b7f-8e0b-465c-8753-ed4903fcf8c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0224 13:15:26.135493  938996 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-973775" [8f0ff711-f7b6-4385-b5da-73cd46adced7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0224 13:15:26.135502  938996 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-973775" [20939fc3-9433-4f8d-9f96-e1f0a4649acf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0224 13:15:26.135509  938996 system_pods.go:61] "kube-proxy-g7vf8" [cfc61df1-27c0-42a0-9160-97066d10ef0b] Running
	I0224 13:15:26.135517  938996 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-973775" [71d69fe1-147b-4031-9bc0-30f84525260d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0224 13:15:26.135523  938996 system_pods.go:61] "storage-provisioner" [3f6f5758-7842-400d-9bc7-d4e2d0226484] Running
	I0224 13:15:26.135533  938996 system_pods.go:74] duration metric: took 6.51481ms to wait for pod list to return data ...
	I0224 13:15:26.135548  938996 node_conditions.go:102] verifying NodePressure condition ...
	I0224 13:15:26.140409  938996 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0224 13:15:26.140459  938996 node_conditions.go:123] node cpu capacity is 2
	I0224 13:15:26.140473  938996 node_conditions.go:105] duration metric: took 4.918491ms to run NodePressure ...
	I0224 13:15:26.140497  938996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 13:15:26.415264  938996 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0224 13:15:26.436499  938996 ops.go:34] apiserver oom_adj: -16
	I0224 13:15:26.436535  938996 kubeadm.go:597] duration metric: took 8.211525035s to restartPrimaryControlPlane
	I0224 13:15:26.436548  938996 kubeadm.go:394] duration metric: took 8.31506061s to StartCluster
	I0224 13:15:26.436574  938996 settings.go:142] acquiring lock: {Name:mk663e441d32b04abcccdab86db3e15276e74de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:15:26.436686  938996 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20451-887294/kubeconfig
	I0224 13:15:26.438045  938996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/kubeconfig: {Name:mk0122b69f41cd40d5267f436266ccce22ce5ef2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:15:26.438289  938996 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.35 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0224 13:15:26.438520  938996 config.go:182] Loaded profile config "kubernetes-upgrade-973775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 13:15:26.438432  938996 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0224 13:15:26.438620  938996 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-973775"
	I0224 13:15:26.438630  938996 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-973775"
	I0224 13:15:26.438641  938996 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-973775"
	I0224 13:15:26.438652  938996 addons.go:238] Setting addon storage-provisioner=true in "kubernetes-upgrade-973775"
	W0224 13:15:26.438662  938996 addons.go:247] addon storage-provisioner should already be in state true
	I0224 13:15:26.438703  938996 host.go:66] Checking if "kubernetes-upgrade-973775" exists ...
	I0224 13:15:26.439045  938996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:15:26.439095  938996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:15:26.439169  938996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:15:26.439216  938996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:15:26.440178  938996 out.go:177] * Verifying Kubernetes components...
	I0224 13:15:26.442385  938996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 13:15:26.456416  938996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37031
	I0224 13:15:26.456441  938996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46209
	I0224 13:15:26.456949  938996 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:15:26.456994  938996 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:15:26.457553  938996 main.go:141] libmachine: Using API Version  1
	I0224 13:15:26.457578  938996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:15:26.457703  938996 main.go:141] libmachine: Using API Version  1
	I0224 13:15:26.457725  938996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:15:26.457922  938996 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:15:26.458034  938996 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:15:26.458289  938996 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetState
	I0224 13:15:26.458480  938996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:15:26.458528  938996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:15:26.461429  938996 kapi.go:59] client config for kubernetes-upgrade-973775: &rest.Config{Host:"https://192.168.50.35:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kubernetes-upgrade-973775/client.crt", KeyFile:"/home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kubernetes-upgrade-973775/client.key", CAFile:"/home/jenkins/minikube-integration/20451-887294/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(n
il), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24da640), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 13:15:26.461813  938996 addons.go:238] Setting addon default-storageclass=true in "kubernetes-upgrade-973775"
	W0224 13:15:26.461835  938996 addons.go:247] addon default-storageclass should already be in state true
	I0224 13:15:26.461868  938996 host.go:66] Checking if "kubernetes-upgrade-973775" exists ...
	I0224 13:15:26.462281  938996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:15:26.462337  938996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:15:26.475985  938996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45691
	I0224 13:15:26.476635  938996 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:15:26.477261  938996 main.go:141] libmachine: Using API Version  1
	I0224 13:15:26.477287  938996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:15:26.477751  938996 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:15:26.477985  938996 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetState
	I0224 13:15:26.479364  938996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35009
	I0224 13:15:26.480202  938996 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:15:26.480344  938996 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .DriverName
	I0224 13:15:26.480897  938996 main.go:141] libmachine: Using API Version  1
	I0224 13:15:26.480921  938996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:15:26.481387  938996 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:15:26.482111  938996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:15:26.482171  938996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:15:26.483239  938996 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 13:15:25.324356  938679 ops.go:34] apiserver oom_adj: -16
	I0224 13:15:25.324414  938679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 13:15:25.824842  938679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 13:15:26.324760  938679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 13:15:26.824830  938679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 13:15:27.325367  938679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 13:15:27.457062  938679 kubeadm.go:1113] duration metric: took 2.308086437s to wait for elevateKubeSystemPrivileges
	I0224 13:15:27.457095  938679 kubeadm.go:394] duration metric: took 14.558974207s to StartCluster
	I0224 13:15:27.457119  938679 settings.go:142] acquiring lock: {Name:mk663e441d32b04abcccdab86db3e15276e74de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:15:27.457188  938679 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20451-887294/kubeconfig
	I0224 13:15:27.458928  938679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/kubeconfig: {Name:mk0122b69f41cd40d5267f436266ccce22ce5ef2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:15:27.459258  938679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0224 13:15:27.459262  938679 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.61 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0224 13:15:27.459297  938679 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0224 13:15:27.459436  938679 addons.go:69] Setting storage-provisioner=true in profile "calico-799329"
	I0224 13:15:27.459459  938679 addons.go:238] Setting addon storage-provisioner=true in "calico-799329"
	I0224 13:15:27.459464  938679 addons.go:69] Setting default-storageclass=true in profile "calico-799329"
	I0224 13:15:27.459474  938679 config.go:182] Loaded profile config "calico-799329": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 13:15:27.459489  938679 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-799329"
	I0224 13:15:27.459500  938679 host.go:66] Checking if "calico-799329" exists ...
	I0224 13:15:27.459898  938679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:15:27.459944  938679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:15:27.459955  938679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:15:27.459995  938679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:15:27.461120  938679 out.go:177] * Verifying Kubernetes components...
	I0224 13:15:27.462812  938679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 13:15:27.481724  938679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34153
	I0224 13:15:27.482479  938679 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:15:27.483164  938679 main.go:141] libmachine: Using API Version  1
	I0224 13:15:27.483186  938679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:15:27.483604  938679 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:15:27.484300  938679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:15:27.484351  938679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:15:27.486976  938679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43937
	I0224 13:15:27.487502  938679 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:15:27.488113  938679 main.go:141] libmachine: Using API Version  1
	I0224 13:15:27.488142  938679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:15:27.488581  938679 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:15:27.488848  938679 main.go:141] libmachine: (calico-799329) Calling .GetState
	I0224 13:15:27.493637  938679 addons.go:238] Setting addon default-storageclass=true in "calico-799329"
	I0224 13:15:27.493698  938679 host.go:66] Checking if "calico-799329" exists ...
	I0224 13:15:27.494111  938679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:15:27.494184  938679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:15:27.511638  938679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40477
	I0224 13:15:27.512317  938679 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:15:27.512994  938679 main.go:141] libmachine: Using API Version  1
	I0224 13:15:27.513025  938679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:15:27.513665  938679 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:15:27.513910  938679 main.go:141] libmachine: (calico-799329) Calling .GetState
	I0224 13:15:27.516241  938679 main.go:141] libmachine: (calico-799329) Calling .DriverName
	I0224 13:15:27.517043  938679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45117
	I0224 13:15:27.517672  938679 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:15:27.517913  938679 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 13:15:26.484946  938996 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0224 13:15:26.484971  938996 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0224 13:15:26.484998  938996 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHHostname
	I0224 13:15:26.488526  938996 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:15:26.488845  938996 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:44:47", ip: ""} in network mk-kubernetes-upgrade-973775: {Iface:virbr2 ExpiryTime:2025-02-24 14:13:59 +0000 UTC Type:0 Mac:52:54:00:d8:44:47 Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:kubernetes-upgrade-973775 Clientid:01:52:54:00:d8:44:47}
	I0224 13:15:26.488874  938996 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined IP address 192.168.50.35 and MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:15:26.489077  938996 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHPort
	I0224 13:15:26.489334  938996 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHKeyPath
	I0224 13:15:26.489495  938996 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHUsername
	I0224 13:15:26.489677  938996 sshutil.go:53] new ssh client: &{IP:192.168.50.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/kubernetes-upgrade-973775/id_rsa Username:docker}
	I0224 13:15:26.502869  938996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44669
	I0224 13:15:26.503552  938996 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:15:26.504168  938996 main.go:141] libmachine: Using API Version  1
	I0224 13:15:26.504198  938996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:15:26.504565  938996 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:15:26.504781  938996 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetState
	I0224 13:15:26.506567  938996 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .DriverName
	I0224 13:15:26.506801  938996 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0224 13:15:26.506820  938996 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0224 13:15:26.506842  938996 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHHostname
	I0224 13:15:26.509128  938996 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:15:26.509854  938996 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:44:47", ip: ""} in network mk-kubernetes-upgrade-973775: {Iface:virbr2 ExpiryTime:2025-02-24 14:13:59 +0000 UTC Type:0 Mac:52:54:00:d8:44:47 Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:kubernetes-upgrade-973775 Clientid:01:52:54:00:d8:44:47}
	I0224 13:15:26.509889  938996 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | domain kubernetes-upgrade-973775 has defined IP address 192.168.50.35 and MAC address 52:54:00:d8:44:47 in network mk-kubernetes-upgrade-973775
	I0224 13:15:26.510124  938996 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHPort
	I0224 13:15:26.510322  938996 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHKeyPath
	I0224 13:15:26.510519  938996 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .GetSSHUsername
	I0224 13:15:26.510664  938996 sshutil.go:53] new ssh client: &{IP:192.168.50.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/kubernetes-upgrade-973775/id_rsa Username:docker}
	I0224 13:15:26.661299  938996 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0224 13:15:26.685756  938996 api_server.go:52] waiting for apiserver process to appear ...
	I0224 13:15:26.685845  938996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:15:26.706652  938996 api_server.go:72] duration metric: took 268.313468ms to wait for apiserver process to appear ...
	I0224 13:15:26.706687  938996 api_server.go:88] waiting for apiserver healthz status ...
	I0224 13:15:26.706714  938996 api_server.go:253] Checking apiserver healthz at https://192.168.50.35:8443/healthz ...
	I0224 13:15:26.713190  938996 api_server.go:279] https://192.168.50.35:8443/healthz returned 200:
	ok
	I0224 13:15:26.714333  938996 api_server.go:141] control plane version: v1.32.2
	I0224 13:15:26.714358  938996 api_server.go:131] duration metric: took 7.662055ms to wait for apiserver health ...
	I0224 13:15:26.714366  938996 system_pods.go:43] waiting for kube-system pods to appear ...
	I0224 13:15:26.720077  938996 system_pods.go:59] 8 kube-system pods found
	I0224 13:15:26.720113  938996 system_pods.go:61] "coredns-668d6bf9bc-bt28c" [dbac42b2-2394-451f-b54f-1d9ec44ac4e1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0224 13:15:26.720122  938996 system_pods.go:61] "coredns-668d6bf9bc-kxnpn" [e2f974f0-ba78-429c-98a4-64e4c5314321] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0224 13:15:26.720136  938996 system_pods.go:61] "etcd-kubernetes-upgrade-973775" [69025b7f-8e0b-465c-8753-ed4903fcf8c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0224 13:15:26.720146  938996 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-973775" [8f0ff711-f7b6-4385-b5da-73cd46adced7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0224 13:15:26.720155  938996 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-973775" [20939fc3-9433-4f8d-9f96-e1f0a4649acf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0224 13:15:26.720161  938996 system_pods.go:61] "kube-proxy-g7vf8" [cfc61df1-27c0-42a0-9160-97066d10ef0b] Running
	I0224 13:15:26.720175  938996 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-973775" [71d69fe1-147b-4031-9bc0-30f84525260d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0224 13:15:26.720184  938996 system_pods.go:61] "storage-provisioner" [3f6f5758-7842-400d-9bc7-d4e2d0226484] Running
	I0224 13:15:26.720193  938996 system_pods.go:74] duration metric: took 5.820503ms to wait for pod list to return data ...
	I0224 13:15:26.720209  938996 kubeadm.go:582] duration metric: took 281.882417ms to wait for: map[apiserver:true system_pods:true]
	I0224 13:15:26.720232  938996 node_conditions.go:102] verifying NodePressure condition ...
	I0224 13:15:26.724165  938996 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0224 13:15:26.724194  938996 node_conditions.go:123] node cpu capacity is 2
	I0224 13:15:26.724207  938996 node_conditions.go:105] duration metric: took 3.969744ms to run NodePressure ...
	I0224 13:15:26.724223  938996 start.go:241] waiting for startup goroutines ...
	I0224 13:15:26.809067  938996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0224 13:15:26.827720  938996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0224 13:15:27.791479  938996 main.go:141] libmachine: Making call to close driver server
	I0224 13:15:27.791518  938996 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .Close
	I0224 13:15:27.791568  938996 main.go:141] libmachine: Making call to close driver server
	I0224 13:15:27.791607  938996 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .Close
	I0224 13:15:27.791885  938996 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | Closing plugin on server side
	I0224 13:15:27.791895  938996 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:15:27.791916  938996 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:15:27.791920  938996 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | Closing plugin on server side
	I0224 13:15:27.791923  938996 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:15:27.791941  938996 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:15:27.791950  938996 main.go:141] libmachine: Making call to close driver server
	I0224 13:15:27.791959  938996 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .Close
	I0224 13:15:27.791929  938996 main.go:141] libmachine: Making call to close driver server
	I0224 13:15:27.792021  938996 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .Close
	I0224 13:15:27.792280  938996 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | Closing plugin on server side
	I0224 13:15:27.792314  938996 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:15:27.792322  938996 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:15:27.793798  938996 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:15:27.793812  938996 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:15:27.793796  938996 main.go:141] libmachine: (kubernetes-upgrade-973775) DBG | Closing plugin on server side
	I0224 13:15:27.800277  938996 main.go:141] libmachine: Making call to close driver server
	I0224 13:15:27.800310  938996 main.go:141] libmachine: (kubernetes-upgrade-973775) Calling .Close
	I0224 13:15:27.800620  938996 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:15:27.800635  938996 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:15:27.802731  938996 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0224 13:15:22.981764  939272 machine.go:93] provisionDockerMachine start ...
	I0224 13:15:22.981798  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .DriverName
	I0224 13:15:22.982104  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .GetSSHHostname
	I0224 13:15:22.985121  939272 main.go:141] libmachine: (cert-expiration-993480) DBG | domain cert-expiration-993480 has defined MAC address 52:54:00:4c:e9:18 in network mk-cert-expiration-993480
	I0224 13:15:22.985719  939272 main.go:141] libmachine: (cert-expiration-993480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e9:18", ip: ""} in network mk-cert-expiration-993480: {Iface:virbr1 ExpiryTime:2025-02-24 14:11:54 +0000 UTC Type:0 Mac:52:54:00:4c:e9:18 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:cert-expiration-993480 Clientid:01:52:54:00:4c:e9:18}
	I0224 13:15:22.985743  939272 main.go:141] libmachine: (cert-expiration-993480) DBG | domain cert-expiration-993480 has defined IP address 192.168.61.171 and MAC address 52:54:00:4c:e9:18 in network mk-cert-expiration-993480
	I0224 13:15:22.986109  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .GetSSHPort
	I0224 13:15:22.986355  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .GetSSHKeyPath
	I0224 13:15:22.986507  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .GetSSHKeyPath
	I0224 13:15:22.986775  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .GetSSHUsername
	I0224 13:15:22.987005  939272 main.go:141] libmachine: Using SSH client type: native
	I0224 13:15:22.987265  939272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.171 22 <nil> <nil>}
	I0224 13:15:22.987280  939272 main.go:141] libmachine: About to run SSH command:
	hostname
	I0224 13:15:23.113171  939272 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-993480
	
	I0224 13:15:23.113195  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .GetMachineName
	I0224 13:15:23.113508  939272 buildroot.go:166] provisioning hostname "cert-expiration-993480"
	I0224 13:15:23.113535  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .GetMachineName
	I0224 13:15:23.113756  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .GetSSHHostname
	I0224 13:15:23.116817  939272 main.go:141] libmachine: (cert-expiration-993480) DBG | domain cert-expiration-993480 has defined MAC address 52:54:00:4c:e9:18 in network mk-cert-expiration-993480
	I0224 13:15:23.117283  939272 main.go:141] libmachine: (cert-expiration-993480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e9:18", ip: ""} in network mk-cert-expiration-993480: {Iface:virbr1 ExpiryTime:2025-02-24 14:11:54 +0000 UTC Type:0 Mac:52:54:00:4c:e9:18 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:cert-expiration-993480 Clientid:01:52:54:00:4c:e9:18}
	I0224 13:15:23.117327  939272 main.go:141] libmachine: (cert-expiration-993480) DBG | domain cert-expiration-993480 has defined IP address 192.168.61.171 and MAC address 52:54:00:4c:e9:18 in network mk-cert-expiration-993480
	I0224 13:15:23.117497  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .GetSSHPort
	I0224 13:15:23.117739  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .GetSSHKeyPath
	I0224 13:15:23.117908  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .GetSSHKeyPath
	I0224 13:15:23.118087  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .GetSSHUsername
	I0224 13:15:23.118281  939272 main.go:141] libmachine: Using SSH client type: native
	I0224 13:15:23.118586  939272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.171 22 <nil> <nil>}
	I0224 13:15:23.118599  939272 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-993480 && echo "cert-expiration-993480" | sudo tee /etc/hostname
	I0224 13:15:23.271845  939272 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-993480
	
	I0224 13:15:23.271901  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .GetSSHHostname
	I0224 13:15:23.275545  939272 main.go:141] libmachine: (cert-expiration-993480) DBG | domain cert-expiration-993480 has defined MAC address 52:54:00:4c:e9:18 in network mk-cert-expiration-993480
	I0224 13:15:23.276033  939272 main.go:141] libmachine: (cert-expiration-993480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e9:18", ip: ""} in network mk-cert-expiration-993480: {Iface:virbr1 ExpiryTime:2025-02-24 14:11:54 +0000 UTC Type:0 Mac:52:54:00:4c:e9:18 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:cert-expiration-993480 Clientid:01:52:54:00:4c:e9:18}
	I0224 13:15:23.276058  939272 main.go:141] libmachine: (cert-expiration-993480) DBG | domain cert-expiration-993480 has defined IP address 192.168.61.171 and MAC address 52:54:00:4c:e9:18 in network mk-cert-expiration-993480
	I0224 13:15:23.276370  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .GetSSHPort
	I0224 13:15:23.276585  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .GetSSHKeyPath
	I0224 13:15:23.276815  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .GetSSHKeyPath
	I0224 13:15:23.277000  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .GetSSHUsername
	I0224 13:15:23.277258  939272 main.go:141] libmachine: Using SSH client type: native
	I0224 13:15:23.277537  939272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.171 22 <nil> <nil>}
	I0224 13:15:23.277555  939272 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-993480' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-993480/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-993480' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 13:15:23.397598  939272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 13:15:23.397622  939272 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20451-887294/.minikube CaCertPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20451-887294/.minikube}
	I0224 13:15:23.397648  939272 buildroot.go:174] setting up certificates
	I0224 13:15:23.397671  939272 provision.go:84] configureAuth start
	I0224 13:15:23.397695  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .GetMachineName
	I0224 13:15:23.398030  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .GetIP
	I0224 13:15:23.401492  939272 main.go:141] libmachine: (cert-expiration-993480) DBG | domain cert-expiration-993480 has defined MAC address 52:54:00:4c:e9:18 in network mk-cert-expiration-993480
	I0224 13:15:23.401903  939272 main.go:141] libmachine: (cert-expiration-993480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e9:18", ip: ""} in network mk-cert-expiration-993480: {Iface:virbr1 ExpiryTime:2025-02-24 14:11:54 +0000 UTC Type:0 Mac:52:54:00:4c:e9:18 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:cert-expiration-993480 Clientid:01:52:54:00:4c:e9:18}
	I0224 13:15:23.401922  939272 main.go:141] libmachine: (cert-expiration-993480) DBG | domain cert-expiration-993480 has defined IP address 192.168.61.171 and MAC address 52:54:00:4c:e9:18 in network mk-cert-expiration-993480
	I0224 13:15:23.402177  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .GetSSHHostname
	I0224 13:15:23.405408  939272 main.go:141] libmachine: (cert-expiration-993480) DBG | domain cert-expiration-993480 has defined MAC address 52:54:00:4c:e9:18 in network mk-cert-expiration-993480
	I0224 13:15:23.405885  939272 main.go:141] libmachine: (cert-expiration-993480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e9:18", ip: ""} in network mk-cert-expiration-993480: {Iface:virbr1 ExpiryTime:2025-02-24 14:11:54 +0000 UTC Type:0 Mac:52:54:00:4c:e9:18 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:cert-expiration-993480 Clientid:01:52:54:00:4c:e9:18}
	I0224 13:15:23.405914  939272 main.go:141] libmachine: (cert-expiration-993480) DBG | domain cert-expiration-993480 has defined IP address 192.168.61.171 and MAC address 52:54:00:4c:e9:18 in network mk-cert-expiration-993480
	I0224 13:15:23.406166  939272 provision.go:143] copyHostCerts
	I0224 13:15:23.406282  939272 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-887294/.minikube/ca.pem, removing ...
	I0224 13:15:23.406292  939272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-887294/.minikube/ca.pem
	I0224 13:15:23.406382  939272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20451-887294/.minikube/ca.pem (1082 bytes)
	I0224 13:15:23.406542  939272 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-887294/.minikube/cert.pem, removing ...
	I0224 13:15:23.406548  939272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-887294/.minikube/cert.pem
	I0224 13:15:23.406579  939272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20451-887294/.minikube/cert.pem (1123 bytes)
	I0224 13:15:23.406666  939272 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-887294/.minikube/key.pem, removing ...
	I0224 13:15:23.406671  939272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-887294/.minikube/key.pem
	I0224 13:15:23.406703  939272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20451-887294/.minikube/key.pem (1679 bytes)
	I0224 13:15:23.406789  939272 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-993480 san=[127.0.0.1 192.168.61.171 cert-expiration-993480 localhost minikube]
	I0224 13:15:23.500600  939272 provision.go:177] copyRemoteCerts
	I0224 13:15:23.500649  939272 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 13:15:23.500676  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .GetSSHHostname
	I0224 13:15:23.504177  939272 main.go:141] libmachine: (cert-expiration-993480) DBG | domain cert-expiration-993480 has defined MAC address 52:54:00:4c:e9:18 in network mk-cert-expiration-993480
	I0224 13:15:23.504590  939272 main.go:141] libmachine: (cert-expiration-993480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e9:18", ip: ""} in network mk-cert-expiration-993480: {Iface:virbr1 ExpiryTime:2025-02-24 14:11:54 +0000 UTC Type:0 Mac:52:54:00:4c:e9:18 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:cert-expiration-993480 Clientid:01:52:54:00:4c:e9:18}
	I0224 13:15:23.504606  939272 main.go:141] libmachine: (cert-expiration-993480) DBG | domain cert-expiration-993480 has defined IP address 192.168.61.171 and MAC address 52:54:00:4c:e9:18 in network mk-cert-expiration-993480
	I0224 13:15:23.504934  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .GetSSHPort
	I0224 13:15:23.505200  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .GetSSHKeyPath
	I0224 13:15:23.505427  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .GetSSHUsername
	I0224 13:15:23.505656  939272 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/cert-expiration-993480/id_rsa Username:docker}
	I0224 13:15:23.602024  939272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0224 13:15:23.636653  939272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0224 13:15:23.675165  939272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0224 13:15:23.712088  939272 provision.go:87] duration metric: took 314.403981ms to configureAuth
	I0224 13:15:23.712112  939272 buildroot.go:189] setting minikube options for container-runtime
	I0224 13:15:23.712347  939272 config.go:182] Loaded profile config "cert-expiration-993480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 13:15:23.712458  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .GetSSHHostname
	I0224 13:15:23.719787  939272 main.go:141] libmachine: (cert-expiration-993480) DBG | domain cert-expiration-993480 has defined MAC address 52:54:00:4c:e9:18 in network mk-cert-expiration-993480
	I0224 13:15:23.720249  939272 main.go:141] libmachine: (cert-expiration-993480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e9:18", ip: ""} in network mk-cert-expiration-993480: {Iface:virbr1 ExpiryTime:2025-02-24 14:11:54 +0000 UTC Type:0 Mac:52:54:00:4c:e9:18 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:cert-expiration-993480 Clientid:01:52:54:00:4c:e9:18}
	I0224 13:15:23.720276  939272 main.go:141] libmachine: (cert-expiration-993480) DBG | domain cert-expiration-993480 has defined IP address 192.168.61.171 and MAC address 52:54:00:4c:e9:18 in network mk-cert-expiration-993480
	I0224 13:15:23.720509  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .GetSSHPort
	I0224 13:15:23.720731  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .GetSSHKeyPath
	I0224 13:15:23.724282  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .GetSSHKeyPath
	I0224 13:15:23.724469  939272 main.go:141] libmachine: (cert-expiration-993480) Calling .GetSSHUsername
	I0224 13:15:23.724632  939272 main.go:141] libmachine: Using SSH client type: native
	I0224 13:15:23.724869  939272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.171 22 <nil> <nil>}
	I0224 13:15:23.724885  939272 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0224 13:15:27.804213  938996 addons.go:514] duration metric: took 1.365796558s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0224 13:15:27.804264  938996 start.go:246] waiting for cluster config update ...
	I0224 13:15:27.804280  938996 start.go:255] writing updated cluster config ...
	I0224 13:15:27.804600  938996 ssh_runner.go:195] Run: rm -f paused
	I0224 13:15:27.864512  938996 start.go:600] kubectl: 1.32.2, cluster: 1.32.2 (minor skew: 0)
	I0224 13:15:27.866347  938996 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-973775" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 24 13:15:28 kubernetes-upgrade-973775 crio[2599]: time="2025-02-24 13:15:28.769251134Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ee9120ea-a251-4f38-97c0-61419efff71b name=/runtime.v1.RuntimeService/Version
	Feb 24 13:15:28 kubernetes-upgrade-973775 crio[2599]: time="2025-02-24 13:15:28.770807023Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=56a1f294-2937-4045-a0cc-fe7724d8e4ca name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:15:28 kubernetes-upgrade-973775 crio[2599]: time="2025-02-24 13:15:28.771860234Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740402928771203907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=56a1f294-2937-4045-a0cc-fe7724d8e4ca name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:15:28 kubernetes-upgrade-973775 crio[2599]: time="2025-02-24 13:15:28.775535711Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3dd9af48-0bc6-498e-bc8c-019867864e97 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:15:28 kubernetes-upgrade-973775 crio[2599]: time="2025-02-24 13:15:28.775699341Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3dd9af48-0bc6-498e-bc8c-019867864e97 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:15:28 kubernetes-upgrade-973775 crio[2599]: time="2025-02-24 13:15:28.776239542Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9059551d199e22cb6f11408cac909e1c898236743eea8cb810ac6a926efa2a2d,PodSandboxId:74e1d7efb21bbbf2e30915891daa7e8982bac99f162957d8a2a01eca307c1ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1740402925709204925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-kxnpn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2f974f0-ba78-429c-98a4-64e4c5314321,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c1589f0bd01907b1133aa4281f92418e962696b4624c926f35ea1229c5fcb4a,PodSandboxId:8dcbcd215e4c50e3620694bd9328bb7953d9dd1d7b8c6f0efadb10197ab80c46,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1740402925653083633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bt28c,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: dbac42b2-2394-451f-b54f-1d9ec44ac4e1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:902bfe7dea076fd5b72c3a66b7fd9f85e28fed6d96ac3b27fddb5a2a958a362a,PodSandboxId:dfbf2051fbb79ac8b6c9a79e9ebd266f7f135324e3f04d9c4a41a13a16106983,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAIN
ER_RUNNING,CreatedAt:1740402925083495102,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g7vf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfc61df1-27c0-42a0-9160-97066d10ef0b,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd199914cc9898219762ee7fd699d705273da1752499fd4378f607826c63a976,PodSandboxId:1b1cd129c50b1b2c705865913be0abd1d910cd0f4e14a38a59d136c7ce952a3e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1740
402925046225916,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f6f5758-7842-400d-9bc7-d4e2d0226484,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffd4c3090b23e638630257de27c8ea19e9a122513080d9fcb474d329777d397f,PodSandboxId:0eff280098ac53cdb3813efe8b540bdc6d08aff26e36b3e0faac796e6b72e1f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1740402921255198679,L
abels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6eb33e67a80670e8dbe67b453164e67,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d6a90aa70656b62b463e8305f585feacd358be80c71b970c11e88c2599b0241,PodSandboxId:e8b18ce55d764219d1383a18ae2dc61a265688e0ce02295f3829a57970303fa1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1740402921239317229,Labels
:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be2a848b940e9139fa9dff79a4ba56d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cf3945c2206349d3ccfe775dee4c899010461d23541c048697d7648c4eb9750,PodSandboxId:cdd6f79722bdd9432bfed56505312e90429b27f0d4c66bd62a452038912d6a77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1740402920993275459,La
bels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17de3d096cf285bd7b5218fc85665263,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57e03b3e1c9572b9f6864dce645f671e260f17049329a539f967144e6f72904d,PodSandboxId:841c77b3bd833bc604aa1bdb1f264b5f0505a3f3dd67002d988c651aa480a822,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1740402918514316547
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e96549e2a1ac22407b819e1180728e0d,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d0212232a15e750990faddc172f1a8b990240fbd1c884ce01dc7260d3f8a51,PodSandboxId:f6f3e5a0f87f43a76c2468b1c6a6aacc8c518cd957847c962a5bc01e2a715c6a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1740402891290552542,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f6f5758-7842-400d-9bc7-d4e2d0226484,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdbec94c61673d153fa1a6e8c90f2c2a92cc23e153535470ba4b19f73bdfa26a,PodSandboxId:6539d54d54bb0f69aa1459f8934f42d0c98f9962c079bc8c9fa767c122d4e70d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1740402891050470634,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g7vf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfc61df1-27c0-42a0-9160-97066d10ef0b,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:852b4f28cf8fefeea4c488718f29f68bb96bf757e6a66cad517d22fe1e297e9c,PodSandboxId:9e39fccdd987596c8f149bcc26f34a9880f86994b19b2deb7f891071769dadf2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1740402890972845153,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod
.name: coredns-668d6bf9bc-kxnpn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2f974f0-ba78-429c-98a4-64e4c5314321,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:420e286dc7a4f68fdbaf399c142ce206f1f6807d7d07b0aa99b026f9247039a4,PodSandboxId:9e1bb1d86f9a1065e5e28130cba08ba094e1f8805dd0e50708e549f1e9201057,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69f
a2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1740402890904598133,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bt28c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbac42b2-2394-451f-b54f-1d9ec44ac4e1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40dc538b8106d0cb1e0bd6e6277ab309570efbcdd379ae63a710d1dcd86a3863,PodSandboxId:82c9ff59df3bef2dd0d13103c2f0c7c3ba6653823456283c597c294465fe5776,Metadata:&ContainerMetadata{Na
me:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1740402880958843674,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17de3d096cf285bd7b5218fc85665263,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb133f6a21608fac23e96e7920a1a8c807f3c1e0fa0c347e73e4802ef68971c,PodSandboxId:5e5094e2311e1116ac935c918a3e81e0773dfd46e1804a457fbbcadaf3bd4a53,Metadata:
&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1740402878928794678,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be2a848b940e9139fa9dff79a4ba56d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b999ffc97fd56092e26ee7b6fe594f6d32869783d68352a9308cc2580e4ce17b,PodSandboxId:c011351d3bccf5fb93d54f7ea2b31c04b81f66f3410714689efe1577db5b96ef,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1740402877097590974,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e96549e2a1ac22407b819e1180728e0d,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74aa004fcd6affa1000a3e1cf470967e20815f56e85b6928b10f42ae5c41325a,PodSandboxId:4d74623382907a049444b8dd58de3e2c9cae0de462438b291ecc87bf53bfa77d,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1740402857471932715,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6eb33e67a80670e8dbe67b453164e67,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3dd9af48-0bc6-498e-bc8c-019867864e97 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:15:28 kubernetes-upgrade-973775 crio[2599]: time="2025-02-24 13:15:28.780854690Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=6de68902-1291-4d16-800a-32e7803bad62 name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 24 13:15:28 kubernetes-upgrade-973775 crio[2599]: time="2025-02-24 13:15:28.781362102Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:74e1d7efb21bbbf2e30915891daa7e8982bac99f162957d8a2a01eca307c1ab9,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-kxnpn,Uid:e2f974f0-ba78-429c-98a4-64e4c5314321,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1740402924990527672,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-kxnpn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2f974f0-ba78-429c-98a4-64e4c5314321,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-24T13:15:24.512811917Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8dcbcd215e4c50e3620694bd9328bb7953d9dd1d7b8c6f0efadb10197ab80c46,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-bt28c,Uid:dbac42b2-2394-451f-b54f-1d9ec44ac4e1,Namespac
e:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1740402924972188595,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-bt28c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbac42b2-2394-451f-b54f-1d9ec44ac4e1,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-24T13:15:24.512809077Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dfbf2051fbb79ac8b6c9a79e9ebd266f7f135324e3f04d9c4a41a13a16106983,Metadata:&PodSandboxMetadata{Name:kube-proxy-g7vf8,Uid:cfc61df1-27c0-42a0-9160-97066d10ef0b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1740402924860922038,Labels:map[string]string{controller-revision-hash: 7bb84c4984,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-g7vf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfc61df1-27c0-42a0-9160-97066d10ef0b,k8s-app: kube-proxy,pod-template-generation: 1,},Annot
ations:map[string]string{kubernetes.io/config.seen: 2025-02-24T13:15:24.512804053Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1b1cd129c50b1b2c705865913be0abd1d910cd0f4e14a38a59d136c7ce952a3e,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:3f6f5758-7842-400d-9bc7-d4e2d0226484,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1740402924823538847,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f6f5758-7842-400d-9bc7-d4e2d0226484,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"conta
iners\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-02-24T13:15:24.512807499Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0eff280098ac53cdb3813efe8b540bdc6d08aff26e36b3e0faac796e6b72e1f8,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-973775,Uid:d6eb33e67a80670e8dbe67b453164e67,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1740402920996781077,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6eb33e67a80670e8dbe67b
453164e67,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d6eb33e67a80670e8dbe67b453164e67,kubernetes.io/config.seen: 2025-02-24T13:15:20.505621141Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e8b18ce55d764219d1383a18ae2dc61a265688e0ce02295f3829a57970303fa1,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-973775,Uid:be2a848b940e9139fa9dff79a4ba56d1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1740402920976043923,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be2a848b940e9139fa9dff79a4ba56d1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.35:8443,kubernetes.io/config.hash: be2a848b940e9139fa9dff79a4ba56d1,kubernetes.io/config.seen: 2025-02-24T13:15:20.505619027Z,kubernetes.io
/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:841c77b3bd833bc604aa1bdb1f264b5f0505a3f3dd67002d988c651aa480a822,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-973775,Uid:e96549e2a1ac22407b819e1180728e0d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1740402918366605976,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e96549e2a1ac22407b819e1180728e0d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.35:2379,kubernetes.io/config.hash: e96549e2a1ac22407b819e1180728e0d,kubernetes.io/config.seen: 2025-02-24T13:14:36.693549668Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cdd6f79722bdd9432bfed56505312e90429b27f0d4c66bd62a452038912d6a77,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-973775,Uid:17de3d096cf285bd7b521
8fc85665263,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1740402918355885977,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17de3d096cf285bd7b5218fc85665263,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 17de3d096cf285bd7b5218fc85665263,kubernetes.io/config.seen: 2025-02-24T13:14:16.699919271Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f6f3e5a0f87f43a76c2468b1c6a6aacc8c518cd957847c962a5bc01e2a715c6a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:3f6f5758-7842-400d-9bc7-d4e2d0226484,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1740402890977402563,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provi
sioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f6f5758-7842-400d-9bc7-d4e2d0226484,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-02-24T13:14:49.162719939Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6539d54d54bb0f69aa1459f8934f42d0c98f9962c079bc
8c9fa767c122d4e70d,Metadata:&PodSandboxMetadata{Name:kube-proxy-g7vf8,Uid:cfc61df1-27c0-42a0-9160-97066d10ef0b,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1740402890757777559,Labels:map[string]string{controller-revision-hash: 7bb84c4984,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-g7vf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfc61df1-27c0-42a0-9160-97066d10ef0b,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-24T13:14:49.243754986Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9e39fccdd987596c8f149bcc26f34a9880f86994b19b2deb7f891071769dadf2,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-kxnpn,Uid:e2f974f0-ba78-429c-98a4-64e4c5314321,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1740402890531054359,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-kxnpn,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: e2f974f0-ba78-429c-98a4-64e4c5314321,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-24T13:14:49.611478782Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9e1bb1d86f9a1065e5e28130cba08ba094e1f8805dd0e50708e549f1e9201057,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-bt28c,Uid:dbac42b2-2394-451f-b54f-1d9ec44ac4e1,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1740402890492334551,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-bt28c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbac42b2-2394-451f-b54f-1d9ec44ac4e1,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-24T13:14:49.585022537Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c011351d3bccf5fb93d54f7ea2b31c04b81f66f3410714689efe1577db5b96ef,Metadata
:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-973775,Uid:e96549e2a1ac22407b819e1180728e0d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1740402877006165294,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e96549e2a1ac22407b819e1180728e0d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.35:2379,kubernetes.io/config.hash: e96549e2a1ac22407b819e1180728e0d,kubernetes.io/config.seen: 2025-02-24T13:14:36.693549668Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5e5094e2311e1116ac935c918a3e81e0773dfd46e1804a457fbbcadaf3bd4a53,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-973775,Uid:be2a848b940e9139fa9dff79a4ba56d1,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1740402857232265353,Labels:map[string]string{component:
kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be2a848b940e9139fa9dff79a4ba56d1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.35:8443,kubernetes.io/config.hash: be2a848b940e9139fa9dff79a4ba56d1,kubernetes.io/config.seen: 2025-02-24T13:14:16.699917413Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:82c9ff59df3bef2dd0d13103c2f0c7c3ba6653823456283c597c294465fe5776,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-973775,Uid:17de3d096cf285bd7b5218fc85665263,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1740402857228320681,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: 17de3d096cf285bd7b5218fc85665263,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 17de3d096cf285bd7b5218fc85665263,kubernetes.io/config.seen: 2025-02-24T13:14:16.699919271Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4d74623382907a049444b8dd58de3e2c9cae0de462438b291ecc87bf53bfa77d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-973775,Uid:d6eb33e67a80670e8dbe67b453164e67,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1740402857227733229,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6eb33e67a80670e8dbe67b453164e67,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d6eb33e67a80670e8dbe67b453164e67,kubernetes.io/config.seen: 2025-02-24T13:14:16.699910501Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},
}" file="otel-collector/interceptors.go:74" id=6de68902-1291-4d16-800a-32e7803bad62 name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 24 13:15:28 kubernetes-upgrade-973775 crio[2599]: time="2025-02-24 13:15:28.783381619Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e45fa913-8990-495b-956a-ad29ee5334ba name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:15:28 kubernetes-upgrade-973775 crio[2599]: time="2025-02-24 13:15:28.783522991Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e45fa913-8990-495b-956a-ad29ee5334ba name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:15:28 kubernetes-upgrade-973775 crio[2599]: time="2025-02-24 13:15:28.783859978Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9059551d199e22cb6f11408cac909e1c898236743eea8cb810ac6a926efa2a2d,PodSandboxId:74e1d7efb21bbbf2e30915891daa7e8982bac99f162957d8a2a01eca307c1ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1740402925709204925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-kxnpn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2f974f0-ba78-429c-98a4-64e4c5314321,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c1589f0bd01907b1133aa4281f92418e962696b4624c926f35ea1229c5fcb4a,PodSandboxId:8dcbcd215e4c50e3620694bd9328bb7953d9dd1d7b8c6f0efadb10197ab80c46,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1740402925653083633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bt28c,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: dbac42b2-2394-451f-b54f-1d9ec44ac4e1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:902bfe7dea076fd5b72c3a66b7fd9f85e28fed6d96ac3b27fddb5a2a958a362a,PodSandboxId:dfbf2051fbb79ac8b6c9a79e9ebd266f7f135324e3f04d9c4a41a13a16106983,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAIN
ER_RUNNING,CreatedAt:1740402925083495102,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g7vf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfc61df1-27c0-42a0-9160-97066d10ef0b,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd199914cc9898219762ee7fd699d705273da1752499fd4378f607826c63a976,PodSandboxId:1b1cd129c50b1b2c705865913be0abd1d910cd0f4e14a38a59d136c7ce952a3e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1740
402925046225916,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f6f5758-7842-400d-9bc7-d4e2d0226484,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffd4c3090b23e638630257de27c8ea19e9a122513080d9fcb474d329777d397f,PodSandboxId:0eff280098ac53cdb3813efe8b540bdc6d08aff26e36b3e0faac796e6b72e1f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1740402921255198679,L
abels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6eb33e67a80670e8dbe67b453164e67,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d6a90aa70656b62b463e8305f585feacd358be80c71b970c11e88c2599b0241,PodSandboxId:e8b18ce55d764219d1383a18ae2dc61a265688e0ce02295f3829a57970303fa1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1740402921239317229,Labels
:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be2a848b940e9139fa9dff79a4ba56d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cf3945c2206349d3ccfe775dee4c899010461d23541c048697d7648c4eb9750,PodSandboxId:cdd6f79722bdd9432bfed56505312e90429b27f0d4c66bd62a452038912d6a77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1740402920993275459,La
bels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17de3d096cf285bd7b5218fc85665263,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57e03b3e1c9572b9f6864dce645f671e260f17049329a539f967144e6f72904d,PodSandboxId:841c77b3bd833bc604aa1bdb1f264b5f0505a3f3dd67002d988c651aa480a822,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1740402918514316547
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e96549e2a1ac22407b819e1180728e0d,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d0212232a15e750990faddc172f1a8b990240fbd1c884ce01dc7260d3f8a51,PodSandboxId:f6f3e5a0f87f43a76c2468b1c6a6aacc8c518cd957847c962a5bc01e2a715c6a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1740402891290552542,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f6f5758-7842-400d-9bc7-d4e2d0226484,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdbec94c61673d153fa1a6e8c90f2c2a92cc23e153535470ba4b19f73bdfa26a,PodSandboxId:6539d54d54bb0f69aa1459f8934f42d0c98f9962c079bc8c9fa767c122d4e70d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1740402891050470634,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g7vf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfc61df1-27c0-42a0-9160-97066d10ef0b,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:852b4f28cf8fefeea4c488718f29f68bb96bf757e6a66cad517d22fe1e297e9c,PodSandboxId:9e39fccdd987596c8f149bcc26f34a9880f86994b19b2deb7f891071769dadf2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1740402890972845153,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod
.name: coredns-668d6bf9bc-kxnpn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2f974f0-ba78-429c-98a4-64e4c5314321,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:420e286dc7a4f68fdbaf399c142ce206f1f6807d7d07b0aa99b026f9247039a4,PodSandboxId:9e1bb1d86f9a1065e5e28130cba08ba094e1f8805dd0e50708e549f1e9201057,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69f
a2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1740402890904598133,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bt28c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbac42b2-2394-451f-b54f-1d9ec44ac4e1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40dc538b8106d0cb1e0bd6e6277ab309570efbcdd379ae63a710d1dcd86a3863,PodSandboxId:82c9ff59df3bef2dd0d13103c2f0c7c3ba6653823456283c597c294465fe5776,Metadata:&ContainerMetadata{Na
me:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1740402880958843674,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17de3d096cf285bd7b5218fc85665263,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb133f6a21608fac23e96e7920a1a8c807f3c1e0fa0c347e73e4802ef68971c,PodSandboxId:5e5094e2311e1116ac935c918a3e81e0773dfd46e1804a457fbbcadaf3bd4a53,Metadata:
&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1740402878928794678,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be2a848b940e9139fa9dff79a4ba56d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b999ffc97fd56092e26ee7b6fe594f6d32869783d68352a9308cc2580e4ce17b,PodSandboxId:c011351d3bccf5fb93d54f7ea2b31c04b81f66f3410714689efe1577db5b96ef,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1740402877097590974,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e96549e2a1ac22407b819e1180728e0d,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74aa004fcd6affa1000a3e1cf470967e20815f56e85b6928b10f42ae5c41325a,PodSandboxId:4d74623382907a049444b8dd58de3e2c9cae0de462438b291ecc87bf53bfa77d,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1740402857471932715,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6eb33e67a80670e8dbe67b453164e67,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e45fa913-8990-495b-956a-ad29ee5334ba name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:15:28 kubernetes-upgrade-973775 crio[2599]: time="2025-02-24 13:15:28.838134098Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f55626c2-86fb-4c0e-a4fc-621c5227270e name=/runtime.v1.RuntimeService/Version
	Feb 24 13:15:28 kubernetes-upgrade-973775 crio[2599]: time="2025-02-24 13:15:28.838234621Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f55626c2-86fb-4c0e-a4fc-621c5227270e name=/runtime.v1.RuntimeService/Version
	Feb 24 13:15:28 kubernetes-upgrade-973775 crio[2599]: time="2025-02-24 13:15:28.839529238Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0796a29c-c850-4f28-ad8c-a6d5d1565ed3 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:15:28 kubernetes-upgrade-973775 crio[2599]: time="2025-02-24 13:15:28.839925398Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740402928839902957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0796a29c-c850-4f28-ad8c-a6d5d1565ed3 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:15:28 kubernetes-upgrade-973775 crio[2599]: time="2025-02-24 13:15:28.840856864Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ac32cc22-8056-4023-8e6c-6e0615719425 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:15:28 kubernetes-upgrade-973775 crio[2599]: time="2025-02-24 13:15:28.840940548Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ac32cc22-8056-4023-8e6c-6e0615719425 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:15:28 kubernetes-upgrade-973775 crio[2599]: time="2025-02-24 13:15:28.841325393Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9059551d199e22cb6f11408cac909e1c898236743eea8cb810ac6a926efa2a2d,PodSandboxId:74e1d7efb21bbbf2e30915891daa7e8982bac99f162957d8a2a01eca307c1ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1740402925709204925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-kxnpn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2f974f0-ba78-429c-98a4-64e4c5314321,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c1589f0bd01907b1133aa4281f92418e962696b4624c926f35ea1229c5fcb4a,PodSandboxId:8dcbcd215e4c50e3620694bd9328bb7953d9dd1d7b8c6f0efadb10197ab80c46,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1740402925653083633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bt28c,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: dbac42b2-2394-451f-b54f-1d9ec44ac4e1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:902bfe7dea076fd5b72c3a66b7fd9f85e28fed6d96ac3b27fddb5a2a958a362a,PodSandboxId:dfbf2051fbb79ac8b6c9a79e9ebd266f7f135324e3f04d9c4a41a13a16106983,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAIN
ER_RUNNING,CreatedAt:1740402925083495102,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g7vf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfc61df1-27c0-42a0-9160-97066d10ef0b,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd199914cc9898219762ee7fd699d705273da1752499fd4378f607826c63a976,PodSandboxId:1b1cd129c50b1b2c705865913be0abd1d910cd0f4e14a38a59d136c7ce952a3e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1740
402925046225916,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f6f5758-7842-400d-9bc7-d4e2d0226484,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffd4c3090b23e638630257de27c8ea19e9a122513080d9fcb474d329777d397f,PodSandboxId:0eff280098ac53cdb3813efe8b540bdc6d08aff26e36b3e0faac796e6b72e1f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1740402921255198679,L
abels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6eb33e67a80670e8dbe67b453164e67,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d6a90aa70656b62b463e8305f585feacd358be80c71b970c11e88c2599b0241,PodSandboxId:e8b18ce55d764219d1383a18ae2dc61a265688e0ce02295f3829a57970303fa1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1740402921239317229,Labels
:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be2a848b940e9139fa9dff79a4ba56d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cf3945c2206349d3ccfe775dee4c899010461d23541c048697d7648c4eb9750,PodSandboxId:cdd6f79722bdd9432bfed56505312e90429b27f0d4c66bd62a452038912d6a77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1740402920993275459,La
bels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17de3d096cf285bd7b5218fc85665263,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57e03b3e1c9572b9f6864dce645f671e260f17049329a539f967144e6f72904d,PodSandboxId:841c77b3bd833bc604aa1bdb1f264b5f0505a3f3dd67002d988c651aa480a822,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1740402918514316547
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e96549e2a1ac22407b819e1180728e0d,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d0212232a15e750990faddc172f1a8b990240fbd1c884ce01dc7260d3f8a51,PodSandboxId:f6f3e5a0f87f43a76c2468b1c6a6aacc8c518cd957847c962a5bc01e2a715c6a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1740402891290552542,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f6f5758-7842-400d-9bc7-d4e2d0226484,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdbec94c61673d153fa1a6e8c90f2c2a92cc23e153535470ba4b19f73bdfa26a,PodSandboxId:6539d54d54bb0f69aa1459f8934f42d0c98f9962c079bc8c9fa767c122d4e70d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1740402891050470634,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g7vf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfc61df1-27c0-42a0-9160-97066d10ef0b,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:852b4f28cf8fefeea4c488718f29f68bb96bf757e6a66cad517d22fe1e297e9c,PodSandboxId:9e39fccdd987596c8f149bcc26f34a9880f86994b19b2deb7f891071769dadf2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1740402890972845153,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod
.name: coredns-668d6bf9bc-kxnpn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2f974f0-ba78-429c-98a4-64e4c5314321,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:420e286dc7a4f68fdbaf399c142ce206f1f6807d7d07b0aa99b026f9247039a4,PodSandboxId:9e1bb1d86f9a1065e5e28130cba08ba094e1f8805dd0e50708e549f1e9201057,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69f
a2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1740402890904598133,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bt28c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbac42b2-2394-451f-b54f-1d9ec44ac4e1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40dc538b8106d0cb1e0bd6e6277ab309570efbcdd379ae63a710d1dcd86a3863,PodSandboxId:82c9ff59df3bef2dd0d13103c2f0c7c3ba6653823456283c597c294465fe5776,Metadata:&ContainerMetadata{Na
me:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1740402880958843674,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17de3d096cf285bd7b5218fc85665263,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb133f6a21608fac23e96e7920a1a8c807f3c1e0fa0c347e73e4802ef68971c,PodSandboxId:5e5094e2311e1116ac935c918a3e81e0773dfd46e1804a457fbbcadaf3bd4a53,Metadata:
&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1740402878928794678,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be2a848b940e9139fa9dff79a4ba56d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b999ffc97fd56092e26ee7b6fe594f6d32869783d68352a9308cc2580e4ce17b,PodSandboxId:c011351d3bccf5fb93d54f7ea2b31c04b81f66f3410714689efe1577db5b96ef,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1740402877097590974,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e96549e2a1ac22407b819e1180728e0d,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74aa004fcd6affa1000a3e1cf470967e20815f56e85b6928b10f42ae5c41325a,PodSandboxId:4d74623382907a049444b8dd58de3e2c9cae0de462438b291ecc87bf53bfa77d,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1740402857471932715,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6eb33e67a80670e8dbe67b453164e67,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ac32cc22-8056-4023-8e6c-6e0615719425 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:15:28 kubernetes-upgrade-973775 crio[2599]: time="2025-02-24 13:15:28.886243457Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9472c75c-9ebf-47b0-86f1-68e00957e548 name=/runtime.v1.RuntimeService/Version
	Feb 24 13:15:28 kubernetes-upgrade-973775 crio[2599]: time="2025-02-24 13:15:28.886342285Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9472c75c-9ebf-47b0-86f1-68e00957e548 name=/runtime.v1.RuntimeService/Version
	Feb 24 13:15:28 kubernetes-upgrade-973775 crio[2599]: time="2025-02-24 13:15:28.888784635Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6b3c5af3-011e-4966-81b5-649794cd2b46 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:15:28 kubernetes-upgrade-973775 crio[2599]: time="2025-02-24 13:15:28.890238240Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740402928890192199,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6b3c5af3-011e-4966-81b5-649794cd2b46 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:15:28 kubernetes-upgrade-973775 crio[2599]: time="2025-02-24 13:15:28.891156712Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=27918dde-ecff-42ed-8c82-39fd03bc0340 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:15:28 kubernetes-upgrade-973775 crio[2599]: time="2025-02-24 13:15:28.891234434Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=27918dde-ecff-42ed-8c82-39fd03bc0340 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:15:28 kubernetes-upgrade-973775 crio[2599]: time="2025-02-24 13:15:28.891830590Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9059551d199e22cb6f11408cac909e1c898236743eea8cb810ac6a926efa2a2d,PodSandboxId:74e1d7efb21bbbf2e30915891daa7e8982bac99f162957d8a2a01eca307c1ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1740402925709204925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-kxnpn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2f974f0-ba78-429c-98a4-64e4c5314321,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c1589f0bd01907b1133aa4281f92418e962696b4624c926f35ea1229c5fcb4a,PodSandboxId:8dcbcd215e4c50e3620694bd9328bb7953d9dd1d7b8c6f0efadb10197ab80c46,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1740402925653083633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bt28c,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: dbac42b2-2394-451f-b54f-1d9ec44ac4e1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:902bfe7dea076fd5b72c3a66b7fd9f85e28fed6d96ac3b27fddb5a2a958a362a,PodSandboxId:dfbf2051fbb79ac8b6c9a79e9ebd266f7f135324e3f04d9c4a41a13a16106983,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAIN
ER_RUNNING,CreatedAt:1740402925083495102,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g7vf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfc61df1-27c0-42a0-9160-97066d10ef0b,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd199914cc9898219762ee7fd699d705273da1752499fd4378f607826c63a976,PodSandboxId:1b1cd129c50b1b2c705865913be0abd1d910cd0f4e14a38a59d136c7ce952a3e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1740
402925046225916,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f6f5758-7842-400d-9bc7-d4e2d0226484,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffd4c3090b23e638630257de27c8ea19e9a122513080d9fcb474d329777d397f,PodSandboxId:0eff280098ac53cdb3813efe8b540bdc6d08aff26e36b3e0faac796e6b72e1f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1740402921255198679,L
abels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6eb33e67a80670e8dbe67b453164e67,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d6a90aa70656b62b463e8305f585feacd358be80c71b970c11e88c2599b0241,PodSandboxId:e8b18ce55d764219d1383a18ae2dc61a265688e0ce02295f3829a57970303fa1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1740402921239317229,Labels
:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be2a848b940e9139fa9dff79a4ba56d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cf3945c2206349d3ccfe775dee4c899010461d23541c048697d7648c4eb9750,PodSandboxId:cdd6f79722bdd9432bfed56505312e90429b27f0d4c66bd62a452038912d6a77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1740402920993275459,La
bels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17de3d096cf285bd7b5218fc85665263,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57e03b3e1c9572b9f6864dce645f671e260f17049329a539f967144e6f72904d,PodSandboxId:841c77b3bd833bc604aa1bdb1f264b5f0505a3f3dd67002d988c651aa480a822,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1740402918514316547
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e96549e2a1ac22407b819e1180728e0d,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d0212232a15e750990faddc172f1a8b990240fbd1c884ce01dc7260d3f8a51,PodSandboxId:f6f3e5a0f87f43a76c2468b1c6a6aacc8c518cd957847c962a5bc01e2a715c6a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1740402891290552542,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f6f5758-7842-400d-9bc7-d4e2d0226484,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdbec94c61673d153fa1a6e8c90f2c2a92cc23e153535470ba4b19f73bdfa26a,PodSandboxId:6539d54d54bb0f69aa1459f8934f42d0c98f9962c079bc8c9fa767c122d4e70d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1740402891050470634,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g7vf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfc61df1-27c0-42a0-9160-97066d10ef0b,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:852b4f28cf8fefeea4c488718f29f68bb96bf757e6a66cad517d22fe1e297e9c,PodSandboxId:9e39fccdd987596c8f149bcc26f34a9880f86994b19b2deb7f891071769dadf2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1740402890972845153,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod
.name: coredns-668d6bf9bc-kxnpn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2f974f0-ba78-429c-98a4-64e4c5314321,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:420e286dc7a4f68fdbaf399c142ce206f1f6807d7d07b0aa99b026f9247039a4,PodSandboxId:9e1bb1d86f9a1065e5e28130cba08ba094e1f8805dd0e50708e549f1e9201057,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69f
a2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1740402890904598133,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bt28c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbac42b2-2394-451f-b54f-1d9ec44ac4e1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40dc538b8106d0cb1e0bd6e6277ab309570efbcdd379ae63a710d1dcd86a3863,PodSandboxId:82c9ff59df3bef2dd0d13103c2f0c7c3ba6653823456283c597c294465fe5776,Metadata:&ContainerMetadata{Na
me:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1740402880958843674,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17de3d096cf285bd7b5218fc85665263,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb133f6a21608fac23e96e7920a1a8c807f3c1e0fa0c347e73e4802ef68971c,PodSandboxId:5e5094e2311e1116ac935c918a3e81e0773dfd46e1804a457fbbcadaf3bd4a53,Metadata:
&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1740402878928794678,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be2a848b940e9139fa9dff79a4ba56d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b999ffc97fd56092e26ee7b6fe594f6d32869783d68352a9308cc2580e4ce17b,PodSandboxId:c011351d3bccf5fb93d54f7ea2b31c04b81f66f3410714689efe1577db5b96ef,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1740402877097590974,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e96549e2a1ac22407b819e1180728e0d,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74aa004fcd6affa1000a3e1cf470967e20815f56e85b6928b10f42ae5c41325a,PodSandboxId:4d74623382907a049444b8dd58de3e2c9cae0de462438b291ecc87bf53bfa77d,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1740402857471932715,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-973775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6eb33e67a80670e8dbe67b453164e67,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=27918dde-ecff-42ed-8c82-39fd03bc0340 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	9059551d199e2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago        Running             coredns                   1                   74e1d7efb21bb       coredns-668d6bf9bc-kxnpn
	2c1589f0bd019       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago        Running             coredns                   1                   8dcbcd215e4c5       coredns-668d6bf9bc-bt28c
	902bfe7dea076       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   3 seconds ago        Running             kube-proxy                1                   dfbf2051fbb79       kube-proxy-g7vf8
	dd199914cc989       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago        Running             storage-provisioner       1                   1b1cd129c50b1       storage-provisioner
	ffd4c3090b23e       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   7 seconds ago        Running             kube-scheduler            1                   0eff280098ac5       kube-scheduler-kubernetes-upgrade-973775
	0d6a90aa70656       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   7 seconds ago        Running             kube-apiserver            2                   e8b18ce55d764       kube-apiserver-kubernetes-upgrade-973775
	9cf3945c22063       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   8 seconds ago        Running             kube-controller-manager   2                   cdd6f79722bdd       kube-controller-manager-kubernetes-upgrade-973775
	57e03b3e1c957       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   10 seconds ago       Running             etcd                      1                   841c77b3bd833       etcd-kubernetes-upgrade-973775
	82d0212232a15       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   37 seconds ago       Exited              storage-provisioner       0                   f6f3e5a0f87f4       storage-provisioner
	fdbec94c61673       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   37 seconds ago       Exited              kube-proxy                0                   6539d54d54bb0       kube-proxy-g7vf8
	852b4f28cf8fe       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   38 seconds ago       Exited              coredns                   0                   9e39fccdd9875       coredns-668d6bf9bc-kxnpn
	420e286dc7a4f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   38 seconds ago       Exited              coredns                   0                   9e1bb1d86f9a1       coredns-668d6bf9bc-bt28c
	40dc538b8106d       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   48 seconds ago       Exited              kube-controller-manager   1                   82c9ff59df3be       kube-controller-manager-kubernetes-upgrade-973775
	2bb133f6a2160       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   50 seconds ago       Exited              kube-apiserver            1                   5e5094e2311e1       kube-apiserver-kubernetes-upgrade-973775
	b999ffc97fd56       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   51 seconds ago       Exited              etcd                      0                   c011351d3bccf       etcd-kubernetes-upgrade-973775
	74aa004fcd6af       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   About a minute ago   Exited              kube-scheduler            0                   4d74623382907       kube-scheduler-kubernetes-upgrade-973775
	
	
	==> coredns [2c1589f0bd01907b1133aa4281f92418e962696b4624c926f35ea1229c5fcb4a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [420e286dc7a4f68fdbaf399c142ce206f1f6807d7d07b0aa99b026f9247039a4] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: Trace[2140562880]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (24-Feb-2025 13:14:51.494) (total time: 14959ms):
	Trace[2140562880]: [14.959156931s] [14.959156931s] END
	[INFO] plugin/kubernetes: Trace[1940679109]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (24-Feb-2025 13:14:51.492) (total time: 14961ms):
	Trace[1940679109]: [14.961228288s] [14.961228288s] END
	[INFO] plugin/kubernetes: Trace[296172352]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (24-Feb-2025 13:14:51.491) (total time: 14962ms):
	Trace[296172352]: [14.962622655s] [14.962622655s] END
	
	
	==> coredns [852b4f28cf8fefeea4c488718f29f68bb96bf757e6a66cad517d22fe1e297e9c] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: Trace[248293436]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (24-Feb-2025 13:14:51.493) (total time: 14952ms):
	Trace[248293436]: [14.952317219s] [14.952317219s] END
	[INFO] plugin/kubernetes: Trace[954028921]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (24-Feb-2025 13:14:51.492) (total time: 14953ms):
	Trace[954028921]: [14.953686976s] [14.953686976s] END
	[INFO] plugin/kubernetes: Trace[1118130866]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (24-Feb-2025 13:14:51.491) (total time: 14957ms):
	Trace[1118130866]: [14.957992587s] [14.957992587s] END
	
	
	==> coredns [9059551d199e22cb6f11408cac909e1c898236743eea8cb810ac6a926efa2a2d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-973775
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-973775
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Feb 2025 13:14:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-973775
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Feb 2025 13:15:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Feb 2025 13:15:24 +0000   Mon, 24 Feb 2025 13:14:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Feb 2025 13:15:24 +0000   Mon, 24 Feb 2025 13:14:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Feb 2025 13:15:24 +0000   Mon, 24 Feb 2025 13:14:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Feb 2025 13:15:24 +0000   Mon, 24 Feb 2025 13:14:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.35
	  Hostname:    kubernetes-upgrade-973775
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d0f6b5ab58cc4799b27aaf48bea546b1
	  System UUID:                d0f6b5ab-58cc-4799-b27a-af48bea546b1
	  Boot ID:                    dfaf2a2d-5f2f-4252-bd40-ec8f3aab1488
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-bt28c                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     40s
	  kube-system                 coredns-668d6bf9bc-kxnpn                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     40s
	  kube-system                 etcd-kubernetes-upgrade-973775                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         47s
	  kube-system                 kube-apiserver-kubernetes-upgrade-973775             250m (12%)    0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-973775    200m (10%)    0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kube-proxy-g7vf8                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-scheduler-kubernetes-upgrade-973775             100m (5%)     0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 37s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  NodeHasNoDiskPressure    73s (x8 over 73s)  kubelet          Node kubernetes-upgrade-973775 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     73s (x7 over 73s)  kubelet          Node kubernetes-upgrade-973775 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  73s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  73s (x8 over 73s)  kubelet          Node kubernetes-upgrade-973775 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           40s                node-controller  Node kubernetes-upgrade-973775 event: Registered Node kubernetes-upgrade-973775 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-973775 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-973775 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)    kubelet          Node kubernetes-upgrade-973775 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                 node-controller  Node kubernetes-upgrade-973775 event: Registered Node kubernetes-upgrade-973775 in Controller
	
	
	==> dmesg <==
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Feb24 13:14] systemd-fstab-generator[560]: Ignoring "noauto" option for root device
	[  +0.068904] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.080294] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.202444] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.155607] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.331780] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +5.060679] systemd-fstab-generator[719]: Ignoring "noauto" option for root device
	[  +0.098472] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.004456] systemd-fstab-generator[843]: Ignoring "noauto" option for root device
	[ +13.700886] kauditd_printk_skb: 87 callbacks suppressed
	[ +14.100007] systemd-fstab-generator[1356]: Ignoring "noauto" option for root device
	[  +0.096723] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.217385] kauditd_printk_skb: 12 callbacks suppressed
	[Feb24 13:15] systemd-fstab-generator[2367]: Ignoring "noauto" option for root device
	[  +0.115692] kauditd_printk_skb: 78 callbacks suppressed
	[  +0.084510] systemd-fstab-generator[2379]: Ignoring "noauto" option for root device
	[  +0.221073] systemd-fstab-generator[2393]: Ignoring "noauto" option for root device
	[  +0.171449] systemd-fstab-generator[2405]: Ignoring "noauto" option for root device
	[  +0.426418] systemd-fstab-generator[2470]: Ignoring "noauto" option for root device
	[  +3.213598] systemd-fstab-generator[2708]: Ignoring "noauto" option for root device
	[  +0.950563] kauditd_printk_skb: 137 callbacks suppressed
	[  +1.825152] systemd-fstab-generator[2923]: Ignoring "noauto" option for root device
	[  +4.674650] kauditd_printk_skb: 52 callbacks suppressed
	[  +1.624290] systemd-fstab-generator[3817]: Ignoring "noauto" option for root device
	
	
	==> etcd [57e03b3e1c9572b9f6864dce645f671e260f17049329a539f967144e6f72904d] <==
	{"level":"info","ts":"2025-02-24T13:15:21.133391Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-02-24T13:15:21.135668Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"856aea5c8f9c1f00","initial-advertise-peer-urls":["https://192.168.50.35:2380"],"listen-peer-urls":["https://192.168.50.35:2380"],"advertise-client-urls":["https://192.168.50.35:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.35:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-02-24T13:15:21.135725Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-02-24T13:15:21.135789Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.50.35:2380"}
	{"level":"info","ts":"2025-02-24T13:15:21.135813Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.50.35:2380"}
	{"level":"info","ts":"2025-02-24T13:15:21.136216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856aea5c8f9c1f00 switched to configuration voters=(9613754037843009280)"}
	{"level":"info","ts":"2025-02-24T13:15:21.139272Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da54fef36cd0983a","local-member-id":"856aea5c8f9c1f00","added-peer-id":"856aea5c8f9c1f00","added-peer-peer-urls":["https://192.168.50.35:2380"]}
	{"level":"info","ts":"2025-02-24T13:15:21.140029Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da54fef36cd0983a","local-member-id":"856aea5c8f9c1f00","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-24T13:15:21.142721Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-24T13:15:22.605516Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856aea5c8f9c1f00 is starting a new election at term 2"}
	{"level":"info","ts":"2025-02-24T13:15:22.605628Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856aea5c8f9c1f00 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-02-24T13:15:22.605674Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856aea5c8f9c1f00 received MsgPreVoteResp from 856aea5c8f9c1f00 at term 2"}
	{"level":"info","ts":"2025-02-24T13:15:22.605704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856aea5c8f9c1f00 became candidate at term 3"}
	{"level":"info","ts":"2025-02-24T13:15:22.605736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856aea5c8f9c1f00 received MsgVoteResp from 856aea5c8f9c1f00 at term 3"}
	{"level":"info","ts":"2025-02-24T13:15:22.605756Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856aea5c8f9c1f00 became leader at term 3"}
	{"level":"info","ts":"2025-02-24T13:15:22.605774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 856aea5c8f9c1f00 elected leader 856aea5c8f9c1f00 at term 3"}
	{"level":"info","ts":"2025-02-24T13:15:22.609689Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"856aea5c8f9c1f00","local-member-attributes":"{Name:kubernetes-upgrade-973775 ClientURLs:[https://192.168.50.35:2379]}","request-path":"/0/members/856aea5c8f9c1f00/attributes","cluster-id":"da54fef36cd0983a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-24T13:15:22.610500Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-24T13:15:22.611201Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-24T13:15:22.611927Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.35:2379"}
	{"level":"info","ts":"2025-02-24T13:15:22.638504Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-24T13:15:22.645482Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-24T13:15:22.645543Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-24T13:15:22.646003Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-24T13:15:22.646686Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [b999ffc97fd56092e26ee7b6fe594f6d32869783d68352a9308cc2580e4ce17b] <==
	{"level":"warn","ts":"2025-02-24T13:14:47.832100Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-24T13:14:47.024839Z","time spent":"807.226347ms","remote":"127.0.0.1:49992","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":209,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/ttl-after-finished-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/ttl-after-finished-controller\" value_size:134 >> failure:<>"}
	{"level":"info","ts":"2025-02-24T13:14:47.832514Z","caller":"traceutil/trace.go:171","msg":"trace[1879024757] transaction","detail":"{read_only:false; response_revision:298; number_of_response:1; }","duration":"812.205706ms","start":"2025-02-24T13:14:47.020295Z","end":"2025-02-24T13:14:47.832501Z","steps":["trace[1879024757] 'process raft request'  (duration: 340.865058ms)","trace[1879024757] 'compare'  (duration: 470.394574ms)"],"step_count":2}
	{"level":"warn","ts":"2025-02-24T13:14:47.832728Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-24T13:14:47.020281Z","time spent":"812.393995ms","remote":"127.0.0.1:49986","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5875,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-kubernetes-upgrade-973775\" mod_revision:136 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-kubernetes-upgrade-973775\" value_size:5810 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-kubernetes-upgrade-973775\" > >"}
	{"level":"warn","ts":"2025-02-24T13:14:48.285382Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.477039ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2233949483335624417 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/root-ca-cert-publisher\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/root-ca-cert-publisher\" value_size:127 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-02-24T13:14:48.285717Z","caller":"traceutil/trace.go:171","msg":"trace[1644550279] transaction","detail":"{read_only:false; response_revision:301; number_of_response:1; }","duration":"440.424303ms","start":"2025-02-24T13:14:47.845279Z","end":"2025-02-24T13:14:48.285703Z","steps":["trace[1644550279] 'process raft request'  (duration: 440.343858ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-24T13:14:48.285776Z","caller":"traceutil/trace.go:171","msg":"trace[1055225857] transaction","detail":"{read_only:false; response_revision:300; number_of_response:1; }","duration":"440.848016ms","start":"2025-02-24T13:14:47.844907Z","end":"2025-02-24T13:14:48.285755Z","steps":["trace[1055225857] 'process raft request'  (duration: 255.947948ms)","trace[1055225857] 'compare'  (duration: 184.364765ms)"],"step_count":2}
	{"level":"warn","ts":"2025-02-24T13:14:48.285965Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-24T13:14:47.844896Z","time spent":"440.97893ms","remote":"127.0.0.1:49992","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":194,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/root-ca-cert-publisher\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/root-ca-cert-publisher\" value_size:127 >> failure:<>"}
	{"level":"warn","ts":"2025-02-24T13:14:48.285823Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-24T13:14:47.845266Z","time spent":"440.514546ms","remote":"127.0.0.1:49986","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4592,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-kubernetes-upgrade-973775\" mod_revision:130 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-kubernetes-upgrade-973775\" value_size:4517 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-kubernetes-upgrade-973775\" > >"}
	{"level":"warn","ts":"2025-02-24T13:14:48.743717Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.939098ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2233949483335624422 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-kubernetes-upgrade-973775\" mod_revision:131 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-kubernetes-upgrade-973775\" value_size:7111 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-kubernetes-upgrade-973775\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-02-24T13:14:48.743896Z","caller":"traceutil/trace.go:171","msg":"trace[1598338896] transaction","detail":"{read_only:false; response_revision:303; number_of_response:1; }","duration":"446.036606ms","start":"2025-02-24T13:14:48.297849Z","end":"2025-02-24T13:14:48.743886Z","steps":["trace[1598338896] 'process raft request'  (duration: 445.999015ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-24T13:14:48.743955Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-24T13:14:48.297836Z","time spent":"446.099529ms","remote":"127.0.0.1:49992","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":192,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/deployment-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/deployment-controller\" value_size:126 >> failure:<>"}
	{"level":"info","ts":"2025-02-24T13:14:48.744078Z","caller":"traceutil/trace.go:171","msg":"trace[14468598] transaction","detail":"{read_only:false; response_revision:302; number_of_response:1; }","duration":"446.767912ms","start":"2025-02-24T13:14:48.297294Z","end":"2025-02-24T13:14:48.744062Z","steps":["trace[14468598] 'process raft request'  (duration: 258.434577ms)","trace[14468598] 'compare'  (duration: 187.847946ms)"],"step_count":2}
	{"level":"warn","ts":"2025-02-24T13:14:48.744189Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-24T13:14:48.297282Z","time spent":"446.861411ms","remote":"127.0.0.1:49986","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":7186,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-kubernetes-upgrade-973775\" mod_revision:131 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-kubernetes-upgrade-973775\" value_size:7111 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-kubernetes-upgrade-973775\" > >"}
	{"level":"warn","ts":"2025-02-24T13:14:48.957162Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.54307ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/validatingadmissionpolicy-status-controller\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-24T13:14:48.957299Z","caller":"traceutil/trace.go:171","msg":"trace[762076635] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/validatingadmissionpolicy-status-controller; range_end:; response_count:0; response_revision:304; }","duration":"103.725003ms","start":"2025-02-24T13:14:48.853561Z","end":"2025-02-24T13:14:48.957286Z","steps":["trace[762076635] 'range keys from in-memory index tree'  (duration: 103.476376ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-24T13:15:06.438109Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-02-24T13:15:06.438251Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"kubernetes-upgrade-973775","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.35:2380"],"advertise-client-urls":["https://192.168.50.35:2379"]}
	{"level":"warn","ts":"2025-02-24T13:15:06.438408Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-24T13:15:06.438594Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-24T13:15:06.531025Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.35:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-24T13:15:06.531172Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.35:2379: use of closed network connection"}
	{"level":"info","ts":"2025-02-24T13:15:06.531259Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"856aea5c8f9c1f00","current-leader-member-id":"856aea5c8f9c1f00"}
	{"level":"info","ts":"2025-02-24T13:15:06.534331Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.50.35:2380"}
	{"level":"info","ts":"2025-02-24T13:15:06.534596Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.50.35:2380"}
	{"level":"info","ts":"2025-02-24T13:15:06.534644Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"kubernetes-upgrade-973775","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.35:2380"],"advertise-client-urls":["https://192.168.50.35:2379"]}
	
	
	==> kernel <==
	 13:15:29 up 1 min,  0 users,  load average: 0.91, 0.30, 0.11
	Linux kubernetes-upgrade-973775 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0d6a90aa70656b62b463e8305f585feacd358be80c71b970c11e88c2599b0241] <==
	I0224 13:15:24.406917       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0224 13:15:24.417811       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0224 13:15:24.417862       1 policy_source.go:240] refreshing policies
	I0224 13:15:24.423876       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0224 13:15:24.433186       1 shared_informer.go:320] Caches are synced for configmaps
	I0224 13:15:24.436237       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0224 13:15:24.436442       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0224 13:15:24.437231       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0224 13:15:24.437313       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0224 13:15:24.437631       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0224 13:15:24.438965       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0224 13:15:24.439035       1 aggregator.go:171] initial CRD sync complete...
	I0224 13:15:24.439043       1 autoregister_controller.go:144] Starting autoregister controller
	I0224 13:15:24.439048       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0224 13:15:24.439053       1 cache.go:39] Caches are synced for autoregister controller
	E0224 13:15:24.455530       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0224 13:15:24.483476       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0224 13:15:24.534340       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0224 13:15:25.247033       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0224 13:15:25.333125       1 controller.go:615] quota admission added evaluator for: endpoints
	I0224 13:15:26.269954       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0224 13:15:26.320726       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0224 13:15:26.369083       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0224 13:15:26.380371       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0224 13:15:27.873921       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [2bb133f6a21608fac23e96e7920a1a8c807f3c1e0fa0c347e73e4802ef68971c] <==
	I0224 13:14:43.204070       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0224 13:14:43.295044       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0224 13:14:43.411591       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0224 13:14:43.434398       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.35]
	I0224 13:14:43.436078       1 controller.go:615] quota admission added evaluator for: endpoints
	I0224 13:14:43.445672       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0224 13:14:43.514153       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0224 13:14:44.001736       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0224 13:14:44.023599       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0224 13:14:44.048004       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0224 13:14:49.210654       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0224 13:14:49.272160       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0224 13:15:06.443292       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0224 13:15:06.455947       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:15:06.457078       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:15:06.457695       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:15:06.458765       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:15:06.461980       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:15:06.462089       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:15:06.462602       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:15:06.462693       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0224 13:15:06.458395       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0224 13:15:06.464680       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:15:06.465041       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:15:06.467095       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [40dc538b8106d0cb1e0bd6e6277ab309570efbcdd379ae63a710d1dcd86a3863] <==
	I0224 13:14:49.160756       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0224 13:14:49.168216       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0224 13:14:49.168992       1 shared_informer.go:320] Caches are synced for PVC protection
	I0224 13:14:49.169525       1 shared_informer.go:320] Caches are synced for expand
	I0224 13:14:49.169573       1 shared_informer.go:320] Caches are synced for persistent volume
	I0224 13:14:49.169758       1 shared_informer.go:320] Caches are synced for resource quota
	I0224 13:14:49.173521       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0224 13:14:49.173644       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-973775"
	I0224 13:14:49.177533       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0224 13:14:49.177616       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0224 13:14:49.179929       1 shared_informer.go:320] Caches are synced for GC
	I0224 13:14:49.184259       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0224 13:14:49.190855       1 shared_informer.go:320] Caches are synced for crt configmap
	I0224 13:14:49.207625       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="kubernetes-upgrade-973775" podCIDRs=["10.244.0.0/24"]
	I0224 13:14:49.207736       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-973775"
	I0224 13:14:49.207816       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-973775"
	I0224 13:14:49.207934       1 shared_informer.go:320] Caches are synced for garbage collector
	I0224 13:14:49.327496       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-973775"
	I0224 13:14:49.618380       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="329.846182ms"
	I0224 13:14:49.637980       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="18.685905ms"
	I0224 13:14:49.638405       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="248.825µs"
	I0224 13:14:49.649190       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="49.231µs"
	I0224 13:14:52.044461       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="112.287µs"
	I0224 13:14:52.148403       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-973775"
	I0224 13:14:52.156230       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="67.376µs"
	
	
	==> kube-controller-manager [9cf3945c2206349d3ccfe775dee4c899010461d23541c048697d7648c4eb9750] <==
	I0224 13:15:27.674844       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-973775"
	I0224 13:15:27.674944       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0224 13:15:27.675084       1 shared_informer.go:320] Caches are synced for attach detach
	I0224 13:15:27.682594       1 shared_informer.go:320] Caches are synced for ephemeral
	I0224 13:15:27.685246       1 shared_informer.go:320] Caches are synced for job
	I0224 13:15:27.685280       1 shared_informer.go:320] Caches are synced for resource quota
	I0224 13:15:27.687760       1 shared_informer.go:320] Caches are synced for cronjob
	I0224 13:15:27.687793       1 shared_informer.go:320] Caches are synced for expand
	I0224 13:15:27.690405       1 shared_informer.go:320] Caches are synced for GC
	I0224 13:15:27.691628       1 shared_informer.go:320] Caches are synced for daemon sets
	I0224 13:15:27.694719       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0224 13:15:27.694894       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="104.961µs"
	I0224 13:15:27.704182       1 shared_informer.go:320] Caches are synced for HPA
	I0224 13:15:27.707543       1 shared_informer.go:320] Caches are synced for garbage collector
	I0224 13:15:27.713629       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0224 13:15:27.713768       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0224 13:15:27.713829       1 shared_informer.go:320] Caches are synced for PV protection
	I0224 13:15:27.713847       1 shared_informer.go:320] Caches are synced for deployment
	I0224 13:15:27.713859       1 shared_informer.go:320] Caches are synced for disruption
	I0224 13:15:27.727842       1 shared_informer.go:320] Caches are synced for endpoint
	I0224 13:15:27.735022       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0224 13:15:27.735159       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-973775"
	I0224 13:15:27.748971       1 shared_informer.go:320] Caches are synced for garbage collector
	I0224 13:15:27.749025       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0224 13:15:27.749035       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [902bfe7dea076fd5b72c3a66b7fd9f85e28fed6d96ac3b27fddb5a2a958a362a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0224 13:15:25.575739       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0224 13:15:25.609196       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.50.35"]
	E0224 13:15:25.609257       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0224 13:15:25.732370       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0224 13:15:25.732404       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0224 13:15:25.732475       1 server_linux.go:170] "Using iptables Proxier"
	I0224 13:15:25.737603       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0224 13:15:25.737919       1 server.go:497] "Version info" version="v1.32.2"
	I0224 13:15:25.737951       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 13:15:25.745407       1 config.go:199] "Starting service config controller"
	I0224 13:15:25.745617       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0224 13:15:25.745658       1 config.go:105] "Starting endpoint slice config controller"
	I0224 13:15:25.745675       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0224 13:15:25.745711       1 config.go:329] "Starting node config controller"
	I0224 13:15:25.745728       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0224 13:15:25.847939       1 shared_informer.go:320] Caches are synced for node config
	I0224 13:15:25.847977       1 shared_informer.go:320] Caches are synced for service config
	I0224 13:15:25.847989       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [fdbec94c61673d153fa1a6e8c90f2c2a92cc23e153535470ba4b19f73bdfa26a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0224 13:14:51.695494       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0224 13:14:51.723299       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.50.35"]
	E0224 13:14:51.723635       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0224 13:14:51.773595       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0224 13:14:51.773669       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0224 13:14:51.773701       1 server_linux.go:170] "Using iptables Proxier"
	I0224 13:14:51.777616       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0224 13:14:51.778936       1 server.go:497] "Version info" version="v1.32.2"
	I0224 13:14:51.779000       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 13:14:51.782986       1 config.go:199] "Starting service config controller"
	I0224 13:14:51.783554       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0224 13:14:51.783647       1 config.go:105] "Starting endpoint slice config controller"
	I0224 13:14:51.783668       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0224 13:14:51.785799       1 config.go:329] "Starting node config controller"
	I0224 13:14:51.785836       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0224 13:14:51.884807       1 shared_informer.go:320] Caches are synced for service config
	I0224 13:14:51.884808       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0224 13:14:51.886198       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [74aa004fcd6affa1000a3e1cf470967e20815f56e85b6928b10f42ae5c41325a] <==
	E0224 13:14:42.119278       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0224 13:14:42.150893       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0224 13:14:42.150969       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0224 13:14:42.199692       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0224 13:14:42.199761       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0224 13:14:42.208001       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0224 13:14:42.208271       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0224 13:14:42.230892       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0224 13:14:42.230952       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0224 13:14:42.231409       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0224 13:14:42.231511       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0224 13:14:42.234025       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0224 13:14:42.234077       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0224 13:14:42.250906       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0224 13:14:42.250992       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0224 13:14:42.268083       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0224 13:14:42.268164       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0224 13:14:42.272897       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0224 13:14:42.272979       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0224 13:14:42.373073       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0224 13:14:42.373316       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0224 13:14:42.427836       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0224 13:14:42.427920       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0224 13:14:44.132951       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0224 13:15:06.463751       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ffd4c3090b23e638630257de27c8ea19e9a122513080d9fcb474d329777d397f] <==
	I0224 13:15:22.366391       1 serving.go:386] Generated self-signed cert in-memory
	W0224 13:15:24.303226       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0224 13:15:24.303394       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0224 13:15:24.303552       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0224 13:15:24.303698       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0224 13:15:24.434986       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0224 13:15:24.436767       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 13:15:24.439821       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0224 13:15:24.439828       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0224 13:15:24.439977       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0224 13:15:24.439852       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0224 13:15:24.540600       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 24 13:15:22 kubernetes-upgrade-973775 kubelet[2930]: E0224 13:15:22.666528    2930 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-973775\" not found" node="kubernetes-upgrade-973775"
	Feb 24 13:15:22 kubernetes-upgrade-973775 kubelet[2930]: E0224 13:15:22.667053    2930 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-973775\" not found" node="kubernetes-upgrade-973775"
	Feb 24 13:15:22 kubernetes-upgrade-973775 kubelet[2930]: E0224 13:15:22.667619    2930 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-973775\" not found" node="kubernetes-upgrade-973775"
	Feb 24 13:15:23 kubernetes-upgrade-973775 kubelet[2930]: E0224 13:15:23.673804    2930 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-973775\" not found" node="kubernetes-upgrade-973775"
	Feb 24 13:15:23 kubernetes-upgrade-973775 kubelet[2930]: E0224 13:15:23.674316    2930 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-973775\" not found" node="kubernetes-upgrade-973775"
	Feb 24 13:15:23 kubernetes-upgrade-973775 kubelet[2930]: E0224 13:15:23.675320    2930 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-973775\" not found" node="kubernetes-upgrade-973775"
	Feb 24 13:15:24 kubernetes-upgrade-973775 kubelet[2930]: I0224 13:15:24.428018    2930 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-kubernetes-upgrade-973775"
	Feb 24 13:15:24 kubernetes-upgrade-973775 kubelet[2930]: E0224 13:15:24.460827    2930 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-kubernetes-upgrade-973775\" already exists" pod="kube-system/etcd-kubernetes-upgrade-973775"
	Feb 24 13:15:24 kubernetes-upgrade-973775 kubelet[2930]: I0224 13:15:24.460999    2930 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-kubernetes-upgrade-973775"
	Feb 24 13:15:24 kubernetes-upgrade-973775 kubelet[2930]: E0224 13:15:24.480998    2930 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-kubernetes-upgrade-973775\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-973775"
	Feb 24 13:15:24 kubernetes-upgrade-973775 kubelet[2930]: I0224 13:15:24.481125    2930 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-kubernetes-upgrade-973775"
	Feb 24 13:15:24 kubernetes-upgrade-973775 kubelet[2930]: E0224 13:15:24.491828    2930 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-kubernetes-upgrade-973775\" already exists" pod="kube-system/kube-controller-manager-kubernetes-upgrade-973775"
	Feb 24 13:15:24 kubernetes-upgrade-973775 kubelet[2930]: I0224 13:15:24.491960    2930 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-kubernetes-upgrade-973775"
	Feb 24 13:15:24 kubernetes-upgrade-973775 kubelet[2930]: E0224 13:15:24.503055    2930 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-kubernetes-upgrade-973775\" already exists" pod="kube-system/kube-scheduler-kubernetes-upgrade-973775"
	Feb 24 13:15:24 kubernetes-upgrade-973775 kubelet[2930]: I0224 13:15:24.503252    2930 apiserver.go:52] "Watching apiserver"
	Feb 24 13:15:24 kubernetes-upgrade-973775 kubelet[2930]: I0224 13:15:24.525380    2930 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Feb 24 13:15:24 kubernetes-upgrade-973775 kubelet[2930]: I0224 13:15:24.525877    2930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cfc61df1-27c0-42a0-9160-97066d10ef0b-lib-modules\") pod \"kube-proxy-g7vf8\" (UID: \"cfc61df1-27c0-42a0-9160-97066d10ef0b\") " pod="kube-system/kube-proxy-g7vf8"
	Feb 24 13:15:24 kubernetes-upgrade-973775 kubelet[2930]: I0224 13:15:24.525987    2930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cfc61df1-27c0-42a0-9160-97066d10ef0b-xtables-lock\") pod \"kube-proxy-g7vf8\" (UID: \"cfc61df1-27c0-42a0-9160-97066d10ef0b\") " pod="kube-system/kube-proxy-g7vf8"
	Feb 24 13:15:24 kubernetes-upgrade-973775 kubelet[2930]: I0224 13:15:24.526026    2930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3f6f5758-7842-400d-9bc7-d4e2d0226484-tmp\") pod \"storage-provisioner\" (UID: \"3f6f5758-7842-400d-9bc7-d4e2d0226484\") " pod="kube-system/storage-provisioner"
	Feb 24 13:15:24 kubernetes-upgrade-973775 kubelet[2930]: I0224 13:15:24.544220    2930 kubelet_node_status.go:125] "Node was previously registered" node="kubernetes-upgrade-973775"
	Feb 24 13:15:24 kubernetes-upgrade-973775 kubelet[2930]: I0224 13:15:24.544515    2930 kubelet_node_status.go:79] "Successfully registered node" node="kubernetes-upgrade-973775"
	Feb 24 13:15:24 kubernetes-upgrade-973775 kubelet[2930]: I0224 13:15:24.544654    2930 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 24 13:15:24 kubernetes-upgrade-973775 kubelet[2930]: I0224 13:15:24.545950    2930 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 24 13:15:27 kubernetes-upgrade-973775 kubelet[2930]: I0224 13:15:27.779877    2930 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Feb 24 13:15:27 kubernetes-upgrade-973775 kubelet[2930]: I0224 13:15:27.780302    2930 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [82d0212232a15e750990faddc172f1a8b990240fbd1c884ce01dc7260d3f8a51] <==
	I0224 13:14:51.464832       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	
	
	==> storage-provisioner [dd199914cc9898219762ee7fd699d705273da1752499fd4378f607826c63a976] <==
	I0224 13:15:25.259771       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0224 13:15:25.300045       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0224 13:15:25.300101       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0224 13:15:25.342741       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0224 13:15:25.342952       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-973775_ce691c2d-1244-4e46-b960-9eb80f65635f!
	I0224 13:15:25.343242       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5cbce1ec-5e3b-4d5d-a557-08892ada841f", APIVersion:"v1", ResourceVersion:"416", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-973775_ce691c2d-1244-4e46-b960-9eb80f65635f became leader
	I0224 13:15:25.445658       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-973775_ce691c2d-1244-4e46-b960-9eb80f65635f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-973775 -n kubernetes-upgrade-973775
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-973775 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-973775" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-973775
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-973775: (1.374928136s)
--- FAIL: TestKubernetesUpgrade (444.14s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (65.85s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-290993 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-290993 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m1.66747712s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-290993] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20451
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20451-887294/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-887294/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-290993" primary control-plane node in "pause-290993" cluster
	* Updating the running kvm2 "pause-290993" VM ...
	* Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-290993" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0224 13:10:17.884715  933673 out.go:345] Setting OutFile to fd 1 ...
	I0224 13:10:17.884831  933673 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 13:10:17.884836  933673 out.go:358] Setting ErrFile to fd 2...
	I0224 13:10:17.884840  933673 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 13:10:17.885075  933673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-887294/.minikube/bin
	I0224 13:10:17.885703  933673 out.go:352] Setting JSON to false
	I0224 13:10:17.886725  933673 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10359,"bootTime":1740392259,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 13:10:17.886794  933673 start.go:139] virtualization: kvm guest
	I0224 13:10:17.889486  933673 out.go:177] * [pause-290993] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0224 13:10:17.890995  933673 notify.go:220] Checking for updates...
	I0224 13:10:17.891015  933673 out.go:177]   - MINIKUBE_LOCATION=20451
	I0224 13:10:17.892674  933673 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 13:10:17.894261  933673 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20451-887294/kubeconfig
	I0224 13:10:17.895699  933673 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-887294/.minikube
	I0224 13:10:17.897380  933673 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0224 13:10:17.898961  933673 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 13:10:17.901044  933673 config.go:182] Loaded profile config "pause-290993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 13:10:17.901747  933673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:10:17.901808  933673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:10:17.924553  933673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33405
	I0224 13:10:17.925006  933673 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:10:17.925783  933673 main.go:141] libmachine: Using API Version  1
	I0224 13:10:17.925817  933673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:10:17.926186  933673 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:10:17.926466  933673 main.go:141] libmachine: (pause-290993) Calling .DriverName
	I0224 13:10:17.926790  933673 driver.go:394] Setting default libvirt URI to qemu:///system
	I0224 13:10:17.927105  933673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:10:17.927150  933673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:10:17.943453  933673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40339
	I0224 13:10:17.943861  933673 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:10:17.944456  933673 main.go:141] libmachine: Using API Version  1
	I0224 13:10:17.944481  933673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:10:17.944863  933673 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:10:17.945091  933673 main.go:141] libmachine: (pause-290993) Calling .DriverName
	I0224 13:10:17.983489  933673 out.go:177] * Using the kvm2 driver based on existing profile
	I0224 13:10:17.984906  933673 start.go:297] selected driver: kvm2
	I0224 13:10:17.984932  933673 start.go:901] validating driver "kvm2" against &{Name:pause-290993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-290993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.181 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 13:10:17.985227  933673 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 13:10:17.985615  933673 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 13:10:17.985727  933673 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20451-887294/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0224 13:10:18.002093  933673 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0224 13:10:18.002823  933673 cni.go:84] Creating CNI manager for ""
	I0224 13:10:18.002880  933673 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0224 13:10:18.002944  933673 start.go:340] cluster config:
	{Name:pause-290993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-290993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.181 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 13:10:18.003109  933673 iso.go:125] acquiring lock: {Name:mk57408cca66a96a13d93cda43cdfac6e61aef3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 13:10:18.005042  933673 out.go:177] * Starting "pause-290993" primary control-plane node in "pause-290993" cluster
	I0224 13:10:18.006524  933673 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0224 13:10:18.006577  933673 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0224 13:10:18.006592  933673 cache.go:56] Caching tarball of preloaded images
	I0224 13:10:18.006693  933673 preload.go:172] Found /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0224 13:10:18.006704  933673 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0224 13:10:18.006856  933673 profile.go:143] Saving config to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/pause-290993/config.json ...
	I0224 13:10:18.007096  933673 start.go:360] acquireMachinesLock for pause-290993: {Name:mk023761b01bb629a1acd40bc8104cc517b0e15b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0224 13:10:24.366780  933673 start.go:364] duration metric: took 6.359609099s to acquireMachinesLock for "pause-290993"
	I0224 13:10:24.366852  933673 start.go:96] Skipping create...Using existing machine configuration
	I0224 13:10:24.366860  933673 fix.go:54] fixHost starting: 
	I0224 13:10:24.367335  933673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:10:24.367397  933673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:10:24.385884  933673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45461
	I0224 13:10:24.386416  933673 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:10:24.387052  933673 main.go:141] libmachine: Using API Version  1
	I0224 13:10:24.387082  933673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:10:24.387487  933673 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:10:24.387679  933673 main.go:141] libmachine: (pause-290993) Calling .DriverName
	I0224 13:10:24.387816  933673 main.go:141] libmachine: (pause-290993) Calling .GetState
	I0224 13:10:24.389549  933673 fix.go:112] recreateIfNeeded on pause-290993: state=Running err=<nil>
	W0224 13:10:24.389593  933673 fix.go:138] unexpected machine state, will restart: <nil>
	I0224 13:10:24.391860  933673 out.go:177] * Updating the running kvm2 "pause-290993" VM ...
	I0224 13:10:24.393290  933673 machine.go:93] provisionDockerMachine start ...
	I0224 13:10:24.393338  933673 main.go:141] libmachine: (pause-290993) Calling .DriverName
	I0224 13:10:24.393590  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHHostname
	I0224 13:10:24.396437  933673 main.go:141] libmachine: (pause-290993) DBG | domain pause-290993 has defined MAC address 52:54:00:1b:f4:6e in network mk-pause-290993
	I0224 13:10:24.397037  933673 main.go:141] libmachine: (pause-290993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:f4:6e", ip: ""} in network mk-pause-290993: {Iface:virbr4 ExpiryTime:2025-02-24 14:09:37 +0000 UTC Type:0 Mac:52:54:00:1b:f4:6e Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:pause-290993 Clientid:01:52:54:00:1b:f4:6e}
	I0224 13:10:24.397065  933673 main.go:141] libmachine: (pause-290993) DBG | domain pause-290993 has defined IP address 192.168.72.181 and MAC address 52:54:00:1b:f4:6e in network mk-pause-290993
	I0224 13:10:24.397281  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHPort
	I0224 13:10:24.397519  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHKeyPath
	I0224 13:10:24.397710  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHKeyPath
	I0224 13:10:24.397878  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHUsername
	I0224 13:10:24.398037  933673 main.go:141] libmachine: Using SSH client type: native
	I0224 13:10:24.398275  933673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0224 13:10:24.398289  933673 main.go:141] libmachine: About to run SSH command:
	hostname
	I0224 13:10:24.511103  933673 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-290993
	
	I0224 13:10:24.511137  933673 main.go:141] libmachine: (pause-290993) Calling .GetMachineName
	I0224 13:10:24.511393  933673 buildroot.go:166] provisioning hostname "pause-290993"
	I0224 13:10:24.511425  933673 main.go:141] libmachine: (pause-290993) Calling .GetMachineName
	I0224 13:10:24.511606  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHHostname
	I0224 13:10:24.514386  933673 main.go:141] libmachine: (pause-290993) DBG | domain pause-290993 has defined MAC address 52:54:00:1b:f4:6e in network mk-pause-290993
	I0224 13:10:24.514783  933673 main.go:141] libmachine: (pause-290993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:f4:6e", ip: ""} in network mk-pause-290993: {Iface:virbr4 ExpiryTime:2025-02-24 14:09:37 +0000 UTC Type:0 Mac:52:54:00:1b:f4:6e Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:pause-290993 Clientid:01:52:54:00:1b:f4:6e}
	I0224 13:10:24.514819  933673 main.go:141] libmachine: (pause-290993) DBG | domain pause-290993 has defined IP address 192.168.72.181 and MAC address 52:54:00:1b:f4:6e in network mk-pause-290993
	I0224 13:10:24.515024  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHPort
	I0224 13:10:24.515292  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHKeyPath
	I0224 13:10:24.515470  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHKeyPath
	I0224 13:10:24.515636  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHUsername
	I0224 13:10:24.515817  933673 main.go:141] libmachine: Using SSH client type: native
	I0224 13:10:24.516065  933673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0224 13:10:24.516086  933673 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-290993 && echo "pause-290993" | sudo tee /etc/hostname
	I0224 13:10:24.641657  933673 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-290993
	
	I0224 13:10:24.641693  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHHostname
	I0224 13:10:24.644682  933673 main.go:141] libmachine: (pause-290993) DBG | domain pause-290993 has defined MAC address 52:54:00:1b:f4:6e in network mk-pause-290993
	I0224 13:10:24.645091  933673 main.go:141] libmachine: (pause-290993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:f4:6e", ip: ""} in network mk-pause-290993: {Iface:virbr4 ExpiryTime:2025-02-24 14:09:37 +0000 UTC Type:0 Mac:52:54:00:1b:f4:6e Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:pause-290993 Clientid:01:52:54:00:1b:f4:6e}
	I0224 13:10:24.645134  933673 main.go:141] libmachine: (pause-290993) DBG | domain pause-290993 has defined IP address 192.168.72.181 and MAC address 52:54:00:1b:f4:6e in network mk-pause-290993
	I0224 13:10:24.645412  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHPort
	I0224 13:10:24.645754  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHKeyPath
	I0224 13:10:24.645992  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHKeyPath
	I0224 13:10:24.646193  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHUsername
	I0224 13:10:24.646364  933673 main.go:141] libmachine: Using SSH client type: native
	I0224 13:10:24.646544  933673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0224 13:10:24.646560  933673 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-290993' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-290993/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-290993' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 13:10:24.754500  933673 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 13:10:24.754533  933673 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20451-887294/.minikube CaCertPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20451-887294/.minikube}
	I0224 13:10:24.754582  933673 buildroot.go:174] setting up certificates
	I0224 13:10:24.754597  933673 provision.go:84] configureAuth start
	I0224 13:10:24.754616  933673 main.go:141] libmachine: (pause-290993) Calling .GetMachineName
	I0224 13:10:24.755003  933673 main.go:141] libmachine: (pause-290993) Calling .GetIP
	I0224 13:10:24.757850  933673 main.go:141] libmachine: (pause-290993) DBG | domain pause-290993 has defined MAC address 52:54:00:1b:f4:6e in network mk-pause-290993
	I0224 13:10:24.758298  933673 main.go:141] libmachine: (pause-290993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:f4:6e", ip: ""} in network mk-pause-290993: {Iface:virbr4 ExpiryTime:2025-02-24 14:09:37 +0000 UTC Type:0 Mac:52:54:00:1b:f4:6e Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:pause-290993 Clientid:01:52:54:00:1b:f4:6e}
	I0224 13:10:24.758325  933673 main.go:141] libmachine: (pause-290993) DBG | domain pause-290993 has defined IP address 192.168.72.181 and MAC address 52:54:00:1b:f4:6e in network mk-pause-290993
	I0224 13:10:24.758522  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHHostname
	I0224 13:10:24.761074  933673 main.go:141] libmachine: (pause-290993) DBG | domain pause-290993 has defined MAC address 52:54:00:1b:f4:6e in network mk-pause-290993
	I0224 13:10:24.761450  933673 main.go:141] libmachine: (pause-290993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:f4:6e", ip: ""} in network mk-pause-290993: {Iface:virbr4 ExpiryTime:2025-02-24 14:09:37 +0000 UTC Type:0 Mac:52:54:00:1b:f4:6e Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:pause-290993 Clientid:01:52:54:00:1b:f4:6e}
	I0224 13:10:24.761479  933673 main.go:141] libmachine: (pause-290993) DBG | domain pause-290993 has defined IP address 192.168.72.181 and MAC address 52:54:00:1b:f4:6e in network mk-pause-290993
	I0224 13:10:24.761654  933673 provision.go:143] copyHostCerts
	I0224 13:10:24.761737  933673 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-887294/.minikube/ca.pem, removing ...
	I0224 13:10:24.761752  933673 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-887294/.minikube/ca.pem
	I0224 13:10:24.761822  933673 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20451-887294/.minikube/ca.pem (1082 bytes)
	I0224 13:10:24.761954  933673 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-887294/.minikube/cert.pem, removing ...
	I0224 13:10:24.761967  933673 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-887294/.minikube/cert.pem
	I0224 13:10:24.761993  933673 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20451-887294/.minikube/cert.pem (1123 bytes)
	I0224 13:10:24.762077  933673 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-887294/.minikube/key.pem, removing ...
	I0224 13:10:24.762088  933673 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-887294/.minikube/key.pem
	I0224 13:10:24.762115  933673 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20451-887294/.minikube/key.pem (1679 bytes)
	I0224 13:10:24.762197  933673 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca-key.pem org=jenkins.pause-290993 san=[127.0.0.1 192.168.72.181 localhost minikube pause-290993]
	I0224 13:10:24.917401  933673 provision.go:177] copyRemoteCerts
	I0224 13:10:24.917478  933673 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 13:10:24.917522  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHHostname
	I0224 13:10:24.921071  933673 main.go:141] libmachine: (pause-290993) DBG | domain pause-290993 has defined MAC address 52:54:00:1b:f4:6e in network mk-pause-290993
	I0224 13:10:24.921558  933673 main.go:141] libmachine: (pause-290993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:f4:6e", ip: ""} in network mk-pause-290993: {Iface:virbr4 ExpiryTime:2025-02-24 14:09:37 +0000 UTC Type:0 Mac:52:54:00:1b:f4:6e Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:pause-290993 Clientid:01:52:54:00:1b:f4:6e}
	I0224 13:10:24.921597  933673 main.go:141] libmachine: (pause-290993) DBG | domain pause-290993 has defined IP address 192.168.72.181 and MAC address 52:54:00:1b:f4:6e in network mk-pause-290993
	I0224 13:10:24.921769  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHPort
	I0224 13:10:24.921994  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHKeyPath
	I0224 13:10:24.922206  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHUsername
	I0224 13:10:24.922378  933673 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/pause-290993/id_rsa Username:docker}
	I0224 13:10:25.010214  933673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0224 13:10:25.042323  933673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0224 13:10:25.080701  933673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0224 13:10:25.117266  933673 provision.go:87] duration metric: took 362.647793ms to configureAuth
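	# Illustrative check (not part of the captured log): the server cert generated by configureAuth
	# above can be inspected on the guest to confirm its SANs include the DHCP-leased IP
	# 192.168.72.181; the /etc/docker/server.pem path is taken from the scp lines above.
	out/minikube-linux-amd64 -p pause-290993 ssh "sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'"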
	I0224 13:10:25.117354  933673 buildroot.go:189] setting minikube options for container-runtime
	I0224 13:10:25.117612  933673 config.go:182] Loaded profile config "pause-290993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 13:10:25.117720  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHHostname
	I0224 13:10:25.120736  933673 main.go:141] libmachine: (pause-290993) DBG | domain pause-290993 has defined MAC address 52:54:00:1b:f4:6e in network mk-pause-290993
	I0224 13:10:25.121340  933673 main.go:141] libmachine: (pause-290993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:f4:6e", ip: ""} in network mk-pause-290993: {Iface:virbr4 ExpiryTime:2025-02-24 14:09:37 +0000 UTC Type:0 Mac:52:54:00:1b:f4:6e Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:pause-290993 Clientid:01:52:54:00:1b:f4:6e}
	I0224 13:10:25.121388  933673 main.go:141] libmachine: (pause-290993) DBG | domain pause-290993 has defined IP address 192.168.72.181 and MAC address 52:54:00:1b:f4:6e in network mk-pause-290993
	I0224 13:10:25.121644  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHPort
	I0224 13:10:25.121852  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHKeyPath
	I0224 13:10:25.122009  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHKeyPath
	I0224 13:10:25.122132  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHUsername
	I0224 13:10:25.122305  933673 main.go:141] libmachine: Using SSH client type: native
	I0224 13:10:25.122515  933673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0224 13:10:25.122532  933673 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0224 13:10:30.682099  933673 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0224 13:10:30.682132  933673 machine.go:96] duration metric: took 6.288824119s to provisionDockerMachine
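	# Illustrative check (not part of the captured log): confirm the CRIO_MINIKUBE_OPTIONS drop-in
	# written a few lines above survived the crio restart; file path is taken from the SSH command above.
	out/minikube-linux-amd64 -p pause-290993 ssh "cat /etc/sysconfig/crio.minikube; sudo systemctl cat crio --no-pager"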
	I0224 13:10:30.682147  933673 start.go:293] postStartSetup for "pause-290993" (driver="kvm2")
	I0224 13:10:30.682161  933673 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 13:10:30.682182  933673 main.go:141] libmachine: (pause-290993) Calling .DriverName
	I0224 13:10:30.682651  933673 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 13:10:30.682693  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHHostname
	I0224 13:10:30.685771  933673 main.go:141] libmachine: (pause-290993) DBG | domain pause-290993 has defined MAC address 52:54:00:1b:f4:6e in network mk-pause-290993
	I0224 13:10:30.686215  933673 main.go:141] libmachine: (pause-290993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:f4:6e", ip: ""} in network mk-pause-290993: {Iface:virbr4 ExpiryTime:2025-02-24 14:09:37 +0000 UTC Type:0 Mac:52:54:00:1b:f4:6e Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:pause-290993 Clientid:01:52:54:00:1b:f4:6e}
	I0224 13:10:30.686245  933673 main.go:141] libmachine: (pause-290993) DBG | domain pause-290993 has defined IP address 192.168.72.181 and MAC address 52:54:00:1b:f4:6e in network mk-pause-290993
	I0224 13:10:30.686460  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHPort
	I0224 13:10:30.686679  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHKeyPath
	I0224 13:10:30.686871  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHUsername
	I0224 13:10:30.686991  933673 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/pause-290993/id_rsa Username:docker}
	I0224 13:10:30.770996  933673 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 13:10:30.775953  933673 info.go:137] Remote host: Buildroot 2023.02.9
	I0224 13:10:30.775987  933673 filesync.go:126] Scanning /home/jenkins/minikube-integration/20451-887294/.minikube/addons for local assets ...
	I0224 13:10:30.776054  933673 filesync.go:126] Scanning /home/jenkins/minikube-integration/20451-887294/.minikube/files for local assets ...
	I0224 13:10:30.776131  933673 filesync.go:149] local asset: /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/ssl/certs/8945642.pem -> 8945642.pem in /etc/ssl/certs
	I0224 13:10:30.776240  933673 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0224 13:10:30.788949  933673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/ssl/certs/8945642.pem --> /etc/ssl/certs/8945642.pem (1708 bytes)
	I0224 13:10:30.817329  933673 start.go:296] duration metric: took 135.143754ms for postStartSetup
	I0224 13:10:30.817388  933673 fix.go:56] duration metric: took 6.450528356s for fixHost
	I0224 13:10:30.817433  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHHostname
	I0224 13:10:30.820269  933673 main.go:141] libmachine: (pause-290993) DBG | domain pause-290993 has defined MAC address 52:54:00:1b:f4:6e in network mk-pause-290993
	I0224 13:10:30.820568  933673 main.go:141] libmachine: (pause-290993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:f4:6e", ip: ""} in network mk-pause-290993: {Iface:virbr4 ExpiryTime:2025-02-24 14:09:37 +0000 UTC Type:0 Mac:52:54:00:1b:f4:6e Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:pause-290993 Clientid:01:52:54:00:1b:f4:6e}
	I0224 13:10:30.820592  933673 main.go:141] libmachine: (pause-290993) DBG | domain pause-290993 has defined IP address 192.168.72.181 and MAC address 52:54:00:1b:f4:6e in network mk-pause-290993
	I0224 13:10:30.820844  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHPort
	I0224 13:10:30.821047  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHKeyPath
	I0224 13:10:30.821221  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHKeyPath
	I0224 13:10:30.821418  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHUsername
	I0224 13:10:30.821599  933673 main.go:141] libmachine: Using SSH client type: native
	I0224 13:10:30.821840  933673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0224 13:10:30.821858  933673 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0224 13:10:30.926511  933673 main.go:141] libmachine: SSH cmd err, output: <nil>: 1740402630.920909166
	
	I0224 13:10:30.926543  933673 fix.go:216] guest clock: 1740402630.920909166
	I0224 13:10:30.926551  933673 fix.go:229] Guest: 2025-02-24 13:10:30.920909166 +0000 UTC Remote: 2025-02-24 13:10:30.817392873 +0000 UTC m=+12.977542167 (delta=103.516293ms)
	I0224 13:10:30.926591  933673 fix.go:200] guest clock delta is within tolerance: 103.516293ms
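	# Illustrative sketch (not part of the captured log): the clock-skew check above compares the
	# guest's "date +%s.%N" output against the host clock; a rough manual equivalent (ssh latency
	# inflates the delta slightly):
	host_now=$(date +%s.%N)
	guest_now=$(out/minikube-linux-amd64 -p pause-290993 ssh "date +%s.%N" | tr -d '\r')
	echo "host-guest delta: $(echo "$host_now - $guest_now" | bc) s"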
	I0224 13:10:30.926596  933673 start.go:83] releasing machines lock for "pause-290993", held for 6.559775586s
	I0224 13:10:30.926614  933673 main.go:141] libmachine: (pause-290993) Calling .DriverName
	I0224 13:10:30.926905  933673 main.go:141] libmachine: (pause-290993) Calling .GetIP
	I0224 13:10:30.929799  933673 main.go:141] libmachine: (pause-290993) DBG | domain pause-290993 has defined MAC address 52:54:00:1b:f4:6e in network mk-pause-290993
	I0224 13:10:30.930196  933673 main.go:141] libmachine: (pause-290993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:f4:6e", ip: ""} in network mk-pause-290993: {Iface:virbr4 ExpiryTime:2025-02-24 14:09:37 +0000 UTC Type:0 Mac:52:54:00:1b:f4:6e Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:pause-290993 Clientid:01:52:54:00:1b:f4:6e}
	I0224 13:10:30.930229  933673 main.go:141] libmachine: (pause-290993) DBG | domain pause-290993 has defined IP address 192.168.72.181 and MAC address 52:54:00:1b:f4:6e in network mk-pause-290993
	I0224 13:10:30.930382  933673 main.go:141] libmachine: (pause-290993) Calling .DriverName
	I0224 13:10:30.931135  933673 main.go:141] libmachine: (pause-290993) Calling .DriverName
	I0224 13:10:30.931357  933673 main.go:141] libmachine: (pause-290993) Calling .DriverName
	I0224 13:10:30.931482  933673 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0224 13:10:30.931542  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHHostname
	I0224 13:10:30.931819  933673 ssh_runner.go:195] Run: cat /version.json
	I0224 13:10:30.931845  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHHostname
	I0224 13:10:30.934571  933673 main.go:141] libmachine: (pause-290993) DBG | domain pause-290993 has defined MAC address 52:54:00:1b:f4:6e in network mk-pause-290993
	I0224 13:10:30.934944  933673 main.go:141] libmachine: (pause-290993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:f4:6e", ip: ""} in network mk-pause-290993: {Iface:virbr4 ExpiryTime:2025-02-24 14:09:37 +0000 UTC Type:0 Mac:52:54:00:1b:f4:6e Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:pause-290993 Clientid:01:52:54:00:1b:f4:6e}
	I0224 13:10:30.934973  933673 main.go:141] libmachine: (pause-290993) DBG | domain pause-290993 has defined IP address 192.168.72.181 and MAC address 52:54:00:1b:f4:6e in network mk-pause-290993
	I0224 13:10:30.935000  933673 main.go:141] libmachine: (pause-290993) DBG | domain pause-290993 has defined MAC address 52:54:00:1b:f4:6e in network mk-pause-290993
	I0224 13:10:30.935193  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHPort
	I0224 13:10:30.935390  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHKeyPath
	I0224 13:10:30.935600  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHUsername
	I0224 13:10:30.935628  933673 main.go:141] libmachine: (pause-290993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:f4:6e", ip: ""} in network mk-pause-290993: {Iface:virbr4 ExpiryTime:2025-02-24 14:09:37 +0000 UTC Type:0 Mac:52:54:00:1b:f4:6e Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:pause-290993 Clientid:01:52:54:00:1b:f4:6e}
	I0224 13:10:30.935664  933673 main.go:141] libmachine: (pause-290993) DBG | domain pause-290993 has defined IP address 192.168.72.181 and MAC address 52:54:00:1b:f4:6e in network mk-pause-290993
	I0224 13:10:30.935771  933673 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/pause-290993/id_rsa Username:docker}
	I0224 13:10:30.935863  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHPort
	I0224 13:10:30.936020  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHKeyPath
	I0224 13:10:30.936147  933673 main.go:141] libmachine: (pause-290993) Calling .GetSSHUsername
	I0224 13:10:30.936283  933673 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/pause-290993/id_rsa Username:docker}
	I0224 13:10:31.016513  933673 ssh_runner.go:195] Run: systemctl --version
	I0224 13:10:31.038738  933673 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0224 13:10:31.195513  933673 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0224 13:10:31.204562  933673 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0224 13:10:31.204654  933673 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0224 13:10:31.214751  933673 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0224 13:10:31.214779  933673 start.go:495] detecting cgroup driver to use...
	I0224 13:10:31.214859  933673 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0224 13:10:31.233941  933673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0224 13:10:31.250249  933673 docker.go:217] disabling cri-docker service (if available) ...
	I0224 13:10:31.250318  933673 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0224 13:10:31.265896  933673 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0224 13:10:31.281077  933673 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0224 13:10:31.424182  933673 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0224 13:10:31.562502  933673 docker.go:233] disabling docker service ...
	I0224 13:10:31.562586  933673 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0224 13:10:31.579931  933673 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0224 13:10:31.595068  933673 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0224 13:10:31.741269  933673 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0224 13:10:31.878666  933673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0224 13:10:31.893838  933673 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 13:10:31.914985  933673 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0224 13:10:31.915046  933673 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:10:31.926442  933673 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0224 13:10:31.926516  933673 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:10:31.937995  933673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:10:31.949291  933673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:10:31.960362  933673 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 13:10:31.972216  933673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:10:31.983735  933673 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:10:31.997243  933673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:10:32.010206  933673 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 13:10:32.020967  933673 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0224 13:10:32.031172  933673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 13:10:32.160080  933673 ssh_runner.go:195] Run: sudo systemctl restart crio
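	# Illustrative check (not part of the captured log): verify the sed edits above landed in the
	# CRI-O drop-in after the restart; the file path and keys are the ones used in the commands above.
	out/minikube-linux-amd64 -p pause-290993 ssh "grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"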
	I0224 13:10:32.380983  933673 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0224 13:10:32.381072  933673 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0224 13:10:32.386770  933673 start.go:563] Will wait 60s for crictl version
	I0224 13:10:32.386845  933673 ssh_runner.go:195] Run: which crictl
	I0224 13:10:32.390834  933673 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0224 13:10:32.429261  933673 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
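	# Illustrative note (not part of the captured log): the crictl call above resolves its endpoint
	# from the /etc/crictl.yaml written at 13:10:31; the explicit equivalent, run inside the VM, is:
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version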
	I0224 13:10:32.429385  933673 ssh_runner.go:195] Run: crio --version
	I0224 13:10:32.463852  933673 ssh_runner.go:195] Run: crio --version
	I0224 13:10:32.496225  933673 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0224 13:10:32.497563  933673 main.go:141] libmachine: (pause-290993) Calling .GetIP
	I0224 13:10:32.500429  933673 main.go:141] libmachine: (pause-290993) DBG | domain pause-290993 has defined MAC address 52:54:00:1b:f4:6e in network mk-pause-290993
	I0224 13:10:32.500827  933673 main.go:141] libmachine: (pause-290993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:f4:6e", ip: ""} in network mk-pause-290993: {Iface:virbr4 ExpiryTime:2025-02-24 14:09:37 +0000 UTC Type:0 Mac:52:54:00:1b:f4:6e Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:pause-290993 Clientid:01:52:54:00:1b:f4:6e}
	I0224 13:10:32.500855  933673 main.go:141] libmachine: (pause-290993) DBG | domain pause-290993 has defined IP address 192.168.72.181 and MAC address 52:54:00:1b:f4:6e in network mk-pause-290993
	I0224 13:10:32.501014  933673 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0224 13:10:32.505472  933673 kubeadm.go:883] updating cluster {Name:pause-290993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-290993 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.181 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-secu
rity-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0224 13:10:32.505601  933673 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0224 13:10:32.505650  933673 ssh_runner.go:195] Run: sudo crictl images --output json
	I0224 13:10:32.552912  933673 crio.go:514] all images are preloaded for cri-o runtime.
	I0224 13:10:32.552938  933673 crio.go:433] Images already preloaded, skipping extraction
	I0224 13:10:32.552984  933673 ssh_runner.go:195] Run: sudo crictl images --output json
	I0224 13:10:32.588125  933673 crio.go:514] all images are preloaded for cri-o runtime.
	I0224 13:10:32.588152  933673 cache_images.go:84] Images are preloaded, skipping loading
	I0224 13:10:32.588167  933673 kubeadm.go:934] updating node { 192.168.72.181 8443 v1.32.2 crio true true} ...
	I0224 13:10:32.588292  933673 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-290993 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.181
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:pause-290993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0224 13:10:32.588355  933673 ssh_runner.go:195] Run: crio config
	I0224 13:10:32.641401  933673 cni.go:84] Creating CNI manager for ""
	I0224 13:10:32.641429  933673 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0224 13:10:32.641447  933673 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0224 13:10:32.641475  933673 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.181 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-290993 NodeName:pause-290993 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.181"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.181 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0224 13:10:32.641610  933673 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.181
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-290993"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.181"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.181"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0224 13:10:32.641682  933673 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0224 13:10:32.653139  933673 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 13:10:32.653207  933673 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0224 13:10:32.664355  933673 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0224 13:10:32.686291  933673 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0224 13:10:32.706652  933673 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
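	# Illustrative check (not part of the captured log): the rendered kubeadm config scp'd above can
	# be sanity-checked on the guest. "kubeadm config validate" is assumed to be available in the
	# v1.32.2 binaries directory listed above; the yaml path is taken from the scp line.
	out/minikube-linux-amd64 -p pause-290993 ssh "sudo /var/lib/minikube/binaries/v1.32.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"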
	I0224 13:10:32.726267  933673 ssh_runner.go:195] Run: grep 192.168.72.181	control-plane.minikube.internal$ /etc/hosts
	I0224 13:10:32.730677  933673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 13:10:32.956986  933673 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0224 13:10:33.066027  933673 certs.go:68] Setting up /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/pause-290993 for IP: 192.168.72.181
	I0224 13:10:33.066060  933673 certs.go:194] generating shared ca certs ...
	I0224 13:10:33.066083  933673 certs.go:226] acquiring lock for ca certs: {Name:mk38777c6b180f63d1816020cff79a01106ddf33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:10:33.066330  933673 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20451-887294/.minikube/ca.key
	I0224 13:10:33.066396  933673 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20451-887294/.minikube/proxy-client-ca.key
	I0224 13:10:33.066411  933673 certs.go:256] generating profile certs ...
	I0224 13:10:33.066549  933673 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/pause-290993/client.key
	I0224 13:10:33.066649  933673 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/pause-290993/apiserver.key.db9d6d8e
	I0224 13:10:33.066710  933673 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/pause-290993/proxy-client.key
	I0224 13:10:33.066884  933673 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/894564.pem (1338 bytes)
	W0224 13:10:33.066932  933673 certs.go:480] ignoring /home/jenkins/minikube-integration/20451-887294/.minikube/certs/894564_empty.pem, impossibly tiny 0 bytes
	I0224 13:10:33.066947  933673 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca-key.pem (1679 bytes)
	I0224 13:10:33.067062  933673 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem (1082 bytes)
	I0224 13:10:33.067107  933673 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem (1123 bytes)
	I0224 13:10:33.067139  933673 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/key.pem (1679 bytes)
	I0224 13:10:33.067211  933673 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/ssl/certs/8945642.pem (1708 bytes)
	I0224 13:10:33.068182  933673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0224 13:10:33.184997  933673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0224 13:10:33.430494  933673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0224 13:10:33.609883  933673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0224 13:10:33.741492  933673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/pause-290993/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0224 13:10:33.845932  933673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/pause-290993/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0224 13:10:33.925072  933673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/pause-290993/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0224 13:10:33.997049  933673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/pause-290993/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0224 13:10:34.047365  933673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/certs/894564.pem --> /usr/share/ca-certificates/894564.pem (1338 bytes)
	I0224 13:10:34.084055  933673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/ssl/certs/8945642.pem --> /usr/share/ca-certificates/8945642.pem (1708 bytes)
	I0224 13:10:34.122047  933673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0224 13:10:34.155304  933673 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0224 13:10:34.182021  933673 ssh_runner.go:195] Run: openssl version
	I0224 13:10:34.190307  933673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/894564.pem && ln -fs /usr/share/ca-certificates/894564.pem /etc/ssl/certs/894564.pem"
	I0224 13:10:34.208307  933673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/894564.pem
	I0224 13:10:34.214281  933673 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 24 12:09 /usr/share/ca-certificates/894564.pem
	I0224 13:10:34.214358  933673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/894564.pem
	I0224 13:10:34.223314  933673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/894564.pem /etc/ssl/certs/51391683.0"
	I0224 13:10:34.235496  933673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8945642.pem && ln -fs /usr/share/ca-certificates/8945642.pem /etc/ssl/certs/8945642.pem"
	I0224 13:10:34.252434  933673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8945642.pem
	I0224 13:10:34.261383  933673 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 24 12:09 /usr/share/ca-certificates/8945642.pem
	I0224 13:10:34.261464  933673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8945642.pem
	I0224 13:10:34.278963  933673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8945642.pem /etc/ssl/certs/3ec20f2e.0"
	I0224 13:10:34.340057  933673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0224 13:10:34.372267  933673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0224 13:10:34.386074  933673 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 24 12:01 /usr/share/ca-certificates/minikubeCA.pem
	I0224 13:10:34.386163  933673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0224 13:10:34.398007  933673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0224 13:10:34.415444  933673 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0224 13:10:34.423384  933673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0224 13:10:34.436909  933673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0224 13:10:34.447879  933673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0224 13:10:34.458884  933673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0224 13:10:34.467929  933673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0224 13:10:34.478962  933673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
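	# Illustrative note (not part of the captured log): "-checkend 86400" makes openssl exit non-zero
	# when a certificate expires within the next 24h, which is what the checks above rely on; run
	# inside the VM, with the cert path taken from the scp lines earlier in this log:
	sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt
	sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt && echo "apiserver cert valid for >24h"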
	I0224 13:10:34.492118  933673 kubeadm.go:392] StartCluster: {Name:pause-290993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-290993 Namespa
ce:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.181 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-securit
y-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 13:10:34.492286  933673 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0224 13:10:34.492365  933673 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0224 13:10:34.567713  933673 cri.go:89] found id: "8f1ea71953e2c9b870aaeb5ec033589e6c70cf49214670b79afaab38ade0a6e7"
	I0224 13:10:34.567741  933673 cri.go:89] found id: "f69fc3df30b8e460e2137176d137957ab7add0f72b0fe68f02803a95f23ff915"
	I0224 13:10:34.567747  933673 cri.go:89] found id: "ba98184a2895c8a63929c78ca71192f47dc2c5957bbe799ed35a24f9bfcd63eb"
	I0224 13:10:34.567751  933673 cri.go:89] found id: "ae1f36dc49e5c36127b0839d3e0b61bc18d838bb8a0c448b55d55f1a6191df34"
	I0224 13:10:34.567756  933673 cri.go:89] found id: "5f1a789af555fca56718d4d1dbba6bd69970fc8e2158cfd4f1d4f49f36bfcfbc"
	I0224 13:10:34.567761  933673 cri.go:89] found id: "c0eeb0c95859b126474282095e6479908c48c39791e874d93d6bb6eb25e0bbaa"
	I0224 13:10:34.567765  933673 cri.go:89] found id: "bd4793faf8b72327b448c56eab7400795635745f8f062b8ea077b03097a7b3cf"
	I0224 13:10:34.567768  933673 cri.go:89] found id: "f237b232fdec99430485abef583ce41dec89778139728d82ca06b2cf4409763d"
	I0224 13:10:34.567772  933673 cri.go:89] found id: "196ca6bd4be4c2a9e9a99043a717de0fab0cc2447f80e3e3225eb544c49fb133"
	I0224 13:10:34.567787  933673 cri.go:89] found id: "cdda937e1591fa55620b27c7664e95ffdfa2231dee821720fbda8e7a42f127f3"
	I0224 13:10:34.567792  933673 cri.go:89] found id: "ccadc4e83c360fb5203180e60ca70b0c1a5132fc83e1cf4c161c5234961453b1"
	I0224 13:10:34.567797  933673 cri.go:89] found id: "d98bc933fca32223e744e465ea878043f12b0bb8a445c1ec8c360b1081a2a844"
	I0224 13:10:34.567801  933673 cri.go:89] found id: ""
	I0224 13:10:34.567858  933673 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
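The container IDs listed at the end of the stderr log come from crictl ps with the kube-system namespace label. As an illustrative triage step (not part of the harness), running the same listing without --quiet maps each ID back to its container name and state:
	out/minikube-linux-amd64 -p pause-290993 ssh "sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system"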
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-290993 -n pause-290993
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-290993 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-290993 logs -n 25: (1.434284182s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-799329 sudo                  | cilium-799329             | jenkins | v1.35.0 | 24 Feb 25 13:08 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-799329 sudo cat              | cilium-799329             | jenkins | v1.35.0 | 24 Feb 25 13:08 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-799329 sudo cat              | cilium-799329             | jenkins | v1.35.0 | 24 Feb 25 13:08 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-799329 sudo                  | cilium-799329             | jenkins | v1.35.0 | 24 Feb 25 13:08 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-799329 sudo                  | cilium-799329             | jenkins | v1.35.0 | 24 Feb 25 13:08 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-799329 sudo                  | cilium-799329             | jenkins | v1.35.0 | 24 Feb 25 13:08 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-799329 sudo find             | cilium-799329             | jenkins | v1.35.0 | 24 Feb 25 13:08 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-799329 sudo crio             | cilium-799329             | jenkins | v1.35.0 | 24 Feb 25 13:08 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-799329                       | cilium-799329             | jenkins | v1.35.0 | 24 Feb 25 13:08 UTC | 24 Feb 25 13:08 UTC |
	| start   | -p kubernetes-upgrade-973775           | kubernetes-upgrade-973775 | jenkins | v1.35.0 | 24 Feb 25 13:08 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p offline-crio-226975                 | offline-crio-226975       | jenkins | v1.35.0 | 24 Feb 25 13:08 UTC | 24 Feb 25 13:08 UTC |
	| start   | -p pause-290993 --memory=2048          | pause-290993              | jenkins | v1.35.0 | 24 Feb 25 13:08 UTC | 24 Feb 25 13:10 UTC |
	|         | --install-addons=false                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2               |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-248837                 | NoKubernetes-248837       | jenkins | v1.35.0 | 24 Feb 25 13:08 UTC | 24 Feb 25 13:09 UTC |
	|         | --no-kubernetes --driver=kvm2          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p running-upgrade-271664              | running-upgrade-271664    | jenkins | v1.35.0 | 24 Feb 25 13:09 UTC | 24 Feb 25 13:11 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-248837                 | NoKubernetes-248837       | jenkins | v1.35.0 | 24 Feb 25 13:09 UTC | 24 Feb 25 13:09 UTC |
	| start   | -p NoKubernetes-248837                 | NoKubernetes-248837       | jenkins | v1.35.0 | 24 Feb 25 13:09 UTC | 24 Feb 25 13:10 UTC |
	|         | --no-kubernetes --driver=kvm2          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p pause-290993                        | pause-290993              | jenkins | v1.35.0 | 24 Feb 25 13:10 UTC | 24 Feb 25 13:11 UTC |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-248837 sudo            | NoKubernetes-248837       | jenkins | v1.35.0 | 24 Feb 25 13:10 UTC |                     |
	|         | systemctl is-active --quiet            |                           |         |         |                     |                     |
	|         | service kubelet                        |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-248837                 | NoKubernetes-248837       | jenkins | v1.35.0 | 24 Feb 25 13:10 UTC | 24 Feb 25 13:10 UTC |
	| start   | -p NoKubernetes-248837                 | NoKubernetes-248837       | jenkins | v1.35.0 | 24 Feb 25 13:10 UTC | 24 Feb 25 13:11 UTC |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-271664              | running-upgrade-271664    | jenkins | v1.35.0 | 24 Feb 25 13:11 UTC | 24 Feb 25 13:11 UTC |
	| start   | -p force-systemd-flag-705501           | force-systemd-flag-705501 | jenkins | v1.35.0 | 24 Feb 25 13:11 UTC |                     |
	|         | --memory=2048 --force-systemd          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-248837 sudo            | NoKubernetes-248837       | jenkins | v1.35.0 | 24 Feb 25 13:11 UTC |                     |
	|         | systemctl is-active --quiet            |                           |         |         |                     |                     |
	|         | service kubelet                        |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-248837                 | NoKubernetes-248837       | jenkins | v1.35.0 | 24 Feb 25 13:11 UTC | 24 Feb 25 13:11 UTC |
	| start   | -p cert-expiration-993480              | cert-expiration-993480    | jenkins | v1.35.0 | 24 Feb 25 13:11 UTC |                     |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/24 13:11:17
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 13:11:17.166257  934589 out.go:345] Setting OutFile to fd 1 ...
	I0224 13:11:17.166487  934589 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 13:11:17.166491  934589 out.go:358] Setting ErrFile to fd 2...
	I0224 13:11:17.166494  934589 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 13:11:17.166709  934589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-887294/.minikube/bin
	I0224 13:11:17.167324  934589 out.go:352] Setting JSON to false
	I0224 13:11:17.168425  934589 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10418,"bootTime":1740392259,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 13:11:17.168524  934589 start.go:139] virtualization: kvm guest
	I0224 13:11:17.171025  934589 out.go:177] * [cert-expiration-993480] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0224 13:11:17.172583  934589 out.go:177]   - MINIKUBE_LOCATION=20451
	I0224 13:11:17.172612  934589 notify.go:220] Checking for updates...
	I0224 13:11:17.175170  934589 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 13:11:17.176837  934589 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20451-887294/kubeconfig
	I0224 13:11:17.178345  934589 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-887294/.minikube
	I0224 13:11:17.179808  934589 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0224 13:11:17.181135  934589 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 13:11:17.183142  934589 config.go:182] Loaded profile config "force-systemd-flag-705501": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 13:11:17.183295  934589 config.go:182] Loaded profile config "kubernetes-upgrade-973775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0224 13:11:17.183465  934589 config.go:182] Loaded profile config "pause-290993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 13:11:17.183606  934589 driver.go:394] Setting default libvirt URI to qemu:///system
	I0224 13:11:17.222934  934589 out.go:177] * Using the kvm2 driver based on user configuration
	I0224 13:11:17.224501  934589 start.go:297] selected driver: kvm2
	I0224 13:11:17.224515  934589 start.go:901] validating driver "kvm2" against <nil>
	I0224 13:11:17.224527  934589 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 13:11:17.225323  934589 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 13:11:17.225410  934589 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20451-887294/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0224 13:11:17.243984  934589 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0224 13:11:17.244028  934589 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0224 13:11:17.244297  934589 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0224 13:11:17.244330  934589 cni.go:84] Creating CNI manager for ""
	I0224 13:11:17.244370  934589 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0224 13:11:17.244376  934589 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0224 13:11:17.244446  934589 start.go:340] cluster config:
	{Name:cert-expiration-993480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:cert-expiration-993480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 13:11:17.244582  934589 iso.go:125] acquiring lock: {Name:mk57408cca66a96a13d93cda43cdfac6e61aef3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 13:11:17.249767  934589 out.go:177] * Starting "cert-expiration-993480" primary control-plane node in "cert-expiration-993480" cluster
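The preceding lines show minikube validating the kvm2 driver and then generating a cluster config from the flags (kvm2 driver, crio runtime, bridge CNI, 2 CPUs, 2048 MB memory, 20000 MB disk, Kubernetes v1.32.2) before starting the control-plane node. As a rough sketch of what the "cluster config:" line encodes, here is a hypothetical, heavily trimmed Go struct holding only the fields visible in the log; it is not minikube's actual config type:

    package main

    import "fmt"

    // ClusterSketch is a hypothetical, heavily trimmed stand-in for the much
    // larger struct minikube prints after "cluster config:".
    type ClusterSketch struct {
        Name              string
        Driver            string
        MemoryMB          int
        CPUs              int
        DiskSizeMB        int
        KubernetesVersion string
        ContainerRuntime  string
        NetworkPlugin     string
    }

    func main() {
        cfg := ClusterSketch{
            Name:              "cert-expiration-993480",
            Driver:            "kvm2",
            MemoryMB:          2048,
            CPUs:              2,
            DiskSizeMB:        20000,
            KubernetesVersion: "v1.32.2",
            ContainerRuntime:  "crio",
            NetworkPlugin:     "cni",
        }
        // %+v prints field:value pairs, mirroring the key:value style in the log.
        fmt.Printf("cluster config: %+v\n", cfg)
    }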
	I0224 13:11:16.487537  933673 addons.go:514] duration metric: took 3.740369ms for enable addons: enabled=[]
	I0224 13:11:16.488454  933673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 13:11:16.679569  933673 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0224 13:11:16.700972  933673 node_ready.go:35] waiting up to 6m0s for node "pause-290993" to be "Ready" ...
	I0224 13:11:16.703818  933673 node_ready.go:49] node "pause-290993" has status "Ready":"True"
	I0224 13:11:16.703858  933673 node_ready.go:38] duration metric: took 2.823027ms for node "pause-290993" to be "Ready" ...
	I0224 13:11:16.703872  933673 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 13:11:16.706865  933673 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-sqwj8" in "kube-system" namespace to be "Ready" ...
	I0224 13:11:16.713069  933673 pod_ready.go:93] pod "coredns-668d6bf9bc-sqwj8" in "kube-system" namespace has status "Ready":"True"
	I0224 13:11:16.713094  933673 pod_ready.go:82] duration metric: took 6.192977ms for pod "coredns-668d6bf9bc-sqwj8" in "kube-system" namespace to be "Ready" ...
	I0224 13:11:16.713104  933673 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-290993" in "kube-system" namespace to be "Ready" ...
	I0224 13:11:17.031260  933673 pod_ready.go:93] pod "etcd-pause-290993" in "kube-system" namespace has status "Ready":"True"
	I0224 13:11:17.031294  933673 pod_ready.go:82] duration metric: took 318.182683ms for pod "etcd-pause-290993" in "kube-system" namespace to be "Ready" ...
	I0224 13:11:17.031311  933673 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-290993" in "kube-system" namespace to be "Ready" ...
	I0224 13:11:17.429529  933673 pod_ready.go:93] pod "kube-apiserver-pause-290993" in "kube-system" namespace has status "Ready":"True"
	I0224 13:11:17.429562  933673 pod_ready.go:82] duration metric: took 398.239067ms for pod "kube-apiserver-pause-290993" in "kube-system" namespace to be "Ready" ...
	I0224 13:11:17.429578  933673 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-290993" in "kube-system" namespace to be "Ready" ...
	I0224 13:11:17.829302  933673 pod_ready.go:93] pod "kube-controller-manager-pause-290993" in "kube-system" namespace has status "Ready":"True"
	I0224 13:11:17.829360  933673 pod_ready.go:82] duration metric: took 399.774384ms for pod "kube-controller-manager-pause-290993" in "kube-system" namespace to be "Ready" ...
	I0224 13:11:17.829374  933673 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mk2vg" in "kube-system" namespace to be "Ready" ...
	I0224 13:11:14.121503  934320 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0224 13:11:14.121793  934320 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:11:14.121873  934320 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:11:14.139268  934320 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41663
	I0224 13:11:14.139733  934320 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:11:14.140428  934320 main.go:141] libmachine: Using API Version  1
	I0224 13:11:14.140449  934320 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:11:14.140867  934320 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:11:14.141124  934320 main.go:141] libmachine: (force-systemd-flag-705501) Calling .GetMachineName
	I0224 13:11:14.141349  934320 main.go:141] libmachine: (force-systemd-flag-705501) Calling .DriverName
	I0224 13:11:14.141560  934320 start.go:159] libmachine.API.Create for "force-systemd-flag-705501" (driver="kvm2")
	I0224 13:11:14.141610  934320 client.go:168] LocalClient.Create starting
	I0224 13:11:14.141659  934320 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem
	I0224 13:11:14.141711  934320 main.go:141] libmachine: Decoding PEM data...
	I0224 13:11:14.141731  934320 main.go:141] libmachine: Parsing certificate...
	I0224 13:11:14.141812  934320 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem
	I0224 13:11:14.141841  934320 main.go:141] libmachine: Decoding PEM data...
	I0224 13:11:14.141862  934320 main.go:141] libmachine: Parsing certificate...
	I0224 13:11:14.141890  934320 main.go:141] libmachine: Running pre-create checks...
	I0224 13:11:14.141906  934320 main.go:141] libmachine: (force-systemd-flag-705501) Calling .PreCreateCheck
	I0224 13:11:14.142391  934320 main.go:141] libmachine: (force-systemd-flag-705501) Calling .GetConfigRaw
	I0224 13:11:14.142858  934320 main.go:141] libmachine: Creating machine...
	I0224 13:11:14.142873  934320 main.go:141] libmachine: (force-systemd-flag-705501) Calling .Create
	I0224 13:11:14.143012  934320 main.go:141] libmachine: (force-systemd-flag-705501) creating KVM machine...
	I0224 13:11:14.143037  934320 main.go:141] libmachine: (force-systemd-flag-705501) creating network...
	I0224 13:11:14.144496  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | found existing default KVM network
	I0224 13:11:14.146225  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | I0224 13:11:14.146056  934382 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000266180}
	I0224 13:11:14.146251  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | created network xml: 
	I0224 13:11:14.146272  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | <network>
	I0224 13:11:14.146281  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG |   <name>mk-force-systemd-flag-705501</name>
	I0224 13:11:14.146291  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG |   <dns enable='no'/>
	I0224 13:11:14.146309  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG |   
	I0224 13:11:14.146324  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0224 13:11:14.146339  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG |     <dhcp>
	I0224 13:11:14.146354  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0224 13:11:14.146365  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG |     </dhcp>
	I0224 13:11:14.146377  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG |   </ip>
	I0224 13:11:14.146387  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG |   
	I0224 13:11:14.146395  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | </network>
	I0224 13:11:14.146405  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | 
	I0224 13:11:14.152234  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | trying to create private KVM network mk-force-systemd-flag-705501 192.168.39.0/24...
	I0224 13:11:14.232218  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | private KVM network mk-force-systemd-flag-705501 192.168.39.0/24 created
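The DBG lines above print the libvirt network XML that libmachine generates for the cluster's isolated /24 (DNS disabled, DHCP handing out .2 through .253) before creating it. Below is a minimal sketch, using only the Go standard library, of rendering comparable XML with text/template; the template and parameter names are illustrative, not minikube's actual generator, and the values are copied from the log:

    package main

    import (
        "os"
        "text/template"
    )

    // networkTmpl mirrors the shape of the XML shown in the log above; the
    // exact fields minikube renders may differ.
    const networkTmpl = `<network>
      <name>{{.Name}}</name>
      <dns enable='no'/>
      <ip address='{{.Gateway}}' netmask='255.255.255.0'>
        <dhcp>
          <range start='{{.DHCPStart}}' end='{{.DHCPEnd}}'/>
        </dhcp>
      </ip>
    </network>
    `

    type netParams struct {
        Name      string
        Gateway   string
        DHCPStart string
        DHCPEnd   string
    }

    func main() {
        t := template.Must(template.New("net").Parse(networkTmpl))
        // Values copied from the generated XML in the log.
        p := netParams{
            Name:      "mk-force-systemd-flag-705501",
            Gateway:   "192.168.39.1",
            DHCPStart: "192.168.39.2",
            DHCPEnd:   "192.168.39.253",
        }
        _ = t.Execute(os.Stdout, p)
    }

The rendered document could then be defined and started through libvirt tooling such as virsh, which is roughly what the "trying to create private KVM network ... created" pair of lines above reports.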
	I0224 13:11:14.232312  934320 main.go:141] libmachine: (force-systemd-flag-705501) setting up store path in /home/jenkins/minikube-integration/20451-887294/.minikube/machines/force-systemd-flag-705501 ...
	I0224 13:11:14.232342  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | I0224 13:11:14.232227  934382 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20451-887294/.minikube
	I0224 13:11:14.232360  934320 main.go:141] libmachine: (force-systemd-flag-705501) building disk image from file:///home/jenkins/minikube-integration/20451-887294/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0224 13:11:14.232469  934320 main.go:141] libmachine: (force-systemd-flag-705501) Downloading /home/jenkins/minikube-integration/20451-887294/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20451-887294/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0224 13:11:14.534873  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | I0224 13:11:14.534682  934382 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/force-systemd-flag-705501/id_rsa...
	I0224 13:11:14.745366  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | I0224 13:11:14.745208  934382 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/force-systemd-flag-705501/force-systemd-flag-705501.rawdisk...
	I0224 13:11:14.745399  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | Writing magic tar header
	I0224 13:11:14.745415  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | Writing SSH key tar header
	I0224 13:11:14.745430  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | I0224 13:11:14.745384  934382 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20451-887294/.minikube/machines/force-systemd-flag-705501 ...
	I0224 13:11:14.745571  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/force-systemd-flag-705501
	I0224 13:11:14.745603  934320 main.go:141] libmachine: (force-systemd-flag-705501) setting executable bit set on /home/jenkins/minikube-integration/20451-887294/.minikube/machines/force-systemd-flag-705501 (perms=drwx------)
	I0224 13:11:14.745635  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20451-887294/.minikube/machines
	I0224 13:11:14.745656  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20451-887294/.minikube
	I0224 13:11:14.745669  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20451-887294
	I0224 13:11:14.745683  934320 main.go:141] libmachine: (force-systemd-flag-705501) setting executable bit set on /home/jenkins/minikube-integration/20451-887294/.minikube/machines (perms=drwxr-xr-x)
	I0224 13:11:14.745700  934320 main.go:141] libmachine: (force-systemd-flag-705501) setting executable bit set on /home/jenkins/minikube-integration/20451-887294/.minikube (perms=drwxr-xr-x)
	I0224 13:11:14.745712  934320 main.go:141] libmachine: (force-systemd-flag-705501) setting executable bit set on /home/jenkins/minikube-integration/20451-887294 (perms=drwxrwxr-x)
	I0224 13:11:14.745724  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0224 13:11:14.745737  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | checking permissions on dir: /home/jenkins
	I0224 13:11:14.745746  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | checking permissions on dir: /home
	I0224 13:11:14.745758  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | skipping /home - not owner
	I0224 13:11:14.745769  934320 main.go:141] libmachine: (force-systemd-flag-705501) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0224 13:11:14.745783  934320 main.go:141] libmachine: (force-systemd-flag-705501) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0224 13:11:14.745792  934320 main.go:141] libmachine: (force-systemd-flag-705501) creating domain...
	I0224 13:11:14.747007  934320 main.go:141] libmachine: (force-systemd-flag-705501) define libvirt domain using xml: 
	I0224 13:11:14.747029  934320 main.go:141] libmachine: (force-systemd-flag-705501) <domain type='kvm'>
	I0224 13:11:14.747039  934320 main.go:141] libmachine: (force-systemd-flag-705501)   <name>force-systemd-flag-705501</name>
	I0224 13:11:14.747047  934320 main.go:141] libmachine: (force-systemd-flag-705501)   <memory unit='MiB'>2048</memory>
	I0224 13:11:14.747055  934320 main.go:141] libmachine: (force-systemd-flag-705501)   <vcpu>2</vcpu>
	I0224 13:11:14.747062  934320 main.go:141] libmachine: (force-systemd-flag-705501)   <features>
	I0224 13:11:14.747069  934320 main.go:141] libmachine: (force-systemd-flag-705501)     <acpi/>
	I0224 13:11:14.747076  934320 main.go:141] libmachine: (force-systemd-flag-705501)     <apic/>
	I0224 13:11:14.747086  934320 main.go:141] libmachine: (force-systemd-flag-705501)     <pae/>
	I0224 13:11:14.747094  934320 main.go:141] libmachine: (force-systemd-flag-705501)     
	I0224 13:11:14.747105  934320 main.go:141] libmachine: (force-systemd-flag-705501)   </features>
	I0224 13:11:14.747116  934320 main.go:141] libmachine: (force-systemd-flag-705501)   <cpu mode='host-passthrough'>
	I0224 13:11:14.747125  934320 main.go:141] libmachine: (force-systemd-flag-705501)   
	I0224 13:11:14.747132  934320 main.go:141] libmachine: (force-systemd-flag-705501)   </cpu>
	I0224 13:11:14.747198  934320 main.go:141] libmachine: (force-systemd-flag-705501)   <os>
	I0224 13:11:14.747224  934320 main.go:141] libmachine: (force-systemd-flag-705501)     <type>hvm</type>
	I0224 13:11:14.747237  934320 main.go:141] libmachine: (force-systemd-flag-705501)     <boot dev='cdrom'/>
	I0224 13:11:14.747249  934320 main.go:141] libmachine: (force-systemd-flag-705501)     <boot dev='hd'/>
	I0224 13:11:14.747262  934320 main.go:141] libmachine: (force-systemd-flag-705501)     <bootmenu enable='no'/>
	I0224 13:11:14.747272  934320 main.go:141] libmachine: (force-systemd-flag-705501)   </os>
	I0224 13:11:14.747283  934320 main.go:141] libmachine: (force-systemd-flag-705501)   <devices>
	I0224 13:11:14.747298  934320 main.go:141] libmachine: (force-systemd-flag-705501)     <disk type='file' device='cdrom'>
	I0224 13:11:14.747321  934320 main.go:141] libmachine: (force-systemd-flag-705501)       <source file='/home/jenkins/minikube-integration/20451-887294/.minikube/machines/force-systemd-flag-705501/boot2docker.iso'/>
	I0224 13:11:14.747335  934320 main.go:141] libmachine: (force-systemd-flag-705501)       <target dev='hdc' bus='scsi'/>
	I0224 13:11:14.747346  934320 main.go:141] libmachine: (force-systemd-flag-705501)       <readonly/>
	I0224 13:11:14.747357  934320 main.go:141] libmachine: (force-systemd-flag-705501)     </disk>
	I0224 13:11:14.747368  934320 main.go:141] libmachine: (force-systemd-flag-705501)     <disk type='file' device='disk'>
	I0224 13:11:14.747382  934320 main.go:141] libmachine: (force-systemd-flag-705501)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0224 13:11:14.747406  934320 main.go:141] libmachine: (force-systemd-flag-705501)       <source file='/home/jenkins/minikube-integration/20451-887294/.minikube/machines/force-systemd-flag-705501/force-systemd-flag-705501.rawdisk'/>
	I0224 13:11:14.747419  934320 main.go:141] libmachine: (force-systemd-flag-705501)       <target dev='hda' bus='virtio'/>
	I0224 13:11:14.747429  934320 main.go:141] libmachine: (force-systemd-flag-705501)     </disk>
	I0224 13:11:14.747438  934320 main.go:141] libmachine: (force-systemd-flag-705501)     <interface type='network'>
	I0224 13:11:14.747450  934320 main.go:141] libmachine: (force-systemd-flag-705501)       <source network='mk-force-systemd-flag-705501'/>
	I0224 13:11:14.747463  934320 main.go:141] libmachine: (force-systemd-flag-705501)       <model type='virtio'/>
	I0224 13:11:14.747477  934320 main.go:141] libmachine: (force-systemd-flag-705501)     </interface>
	I0224 13:11:14.747490  934320 main.go:141] libmachine: (force-systemd-flag-705501)     <interface type='network'>
	I0224 13:11:14.747501  934320 main.go:141] libmachine: (force-systemd-flag-705501)       <source network='default'/>
	I0224 13:11:14.747518  934320 main.go:141] libmachine: (force-systemd-flag-705501)       <model type='virtio'/>
	I0224 13:11:14.747529  934320 main.go:141] libmachine: (force-systemd-flag-705501)     </interface>
	I0224 13:11:14.747538  934320 main.go:141] libmachine: (force-systemd-flag-705501)     <serial type='pty'>
	I0224 13:11:14.747552  934320 main.go:141] libmachine: (force-systemd-flag-705501)       <target port='0'/>
	I0224 13:11:14.747565  934320 main.go:141] libmachine: (force-systemd-flag-705501)     </serial>
	I0224 13:11:14.747575  934320 main.go:141] libmachine: (force-systemd-flag-705501)     <console type='pty'>
	I0224 13:11:14.747587  934320 main.go:141] libmachine: (force-systemd-flag-705501)       <target type='serial' port='0'/>
	I0224 13:11:14.747596  934320 main.go:141] libmachine: (force-systemd-flag-705501)     </console>
	I0224 13:11:14.747608  934320 main.go:141] libmachine: (force-systemd-flag-705501)     <rng model='virtio'>
	I0224 13:11:14.747620  934320 main.go:141] libmachine: (force-systemd-flag-705501)       <backend model='random'>/dev/random</backend>
	I0224 13:11:14.747639  934320 main.go:141] libmachine: (force-systemd-flag-705501)     </rng>
	I0224 13:11:14.747648  934320 main.go:141] libmachine: (force-systemd-flag-705501)     
	I0224 13:11:14.747659  934320 main.go:141] libmachine: (force-systemd-flag-705501)     
	I0224 13:11:14.747668  934320 main.go:141] libmachine: (force-systemd-flag-705501)   </devices>
	I0224 13:11:14.747680  934320 main.go:141] libmachine: (force-systemd-flag-705501) </domain>
	I0224 13:11:14.747694  934320 main.go:141] libmachine: (force-systemd-flag-705501) 
	I0224 13:11:14.751975  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | domain force-systemd-flag-705501 has defined MAC address 52:54:00:ba:d0:37 in network default
	I0224 13:11:14.752651  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | domain force-systemd-flag-705501 has defined MAC address 52:54:00:b5:e9:60 in network mk-force-systemd-flag-705501
	I0224 13:11:14.752669  934320 main.go:141] libmachine: (force-systemd-flag-705501) starting domain...
	I0224 13:11:14.752680  934320 main.go:141] libmachine: (force-systemd-flag-705501) ensuring networks are active...
	I0224 13:11:14.753381  934320 main.go:141] libmachine: (force-systemd-flag-705501) Ensuring network default is active
	I0224 13:11:14.753732  934320 main.go:141] libmachine: (force-systemd-flag-705501) Ensuring network mk-force-systemd-flag-705501 is active
	I0224 13:11:14.754339  934320 main.go:141] libmachine: (force-systemd-flag-705501) getting domain XML...
	I0224 13:11:14.755153  934320 main.go:141] libmachine: (force-systemd-flag-705501) creating domain...
	I0224 13:11:16.049804  934320 main.go:141] libmachine: (force-systemd-flag-705501) waiting for IP...
	I0224 13:11:16.050954  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | domain force-systemd-flag-705501 has defined MAC address 52:54:00:b5:e9:60 in network mk-force-systemd-flag-705501
	I0224 13:11:16.051522  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | unable to find current IP address of domain force-systemd-flag-705501 in network mk-force-systemd-flag-705501
	I0224 13:11:16.051586  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | I0224 13:11:16.051520  934382 retry.go:31] will retry after 250.475537ms: waiting for domain to come up
	I0224 13:11:16.304135  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | domain force-systemd-flag-705501 has defined MAC address 52:54:00:b5:e9:60 in network mk-force-systemd-flag-705501
	I0224 13:11:16.304688  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | unable to find current IP address of domain force-systemd-flag-705501 in network mk-force-systemd-flag-705501
	I0224 13:11:16.304741  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | I0224 13:11:16.304670  934382 retry.go:31] will retry after 239.587801ms: waiting for domain to come up
	I0224 13:11:16.547428  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | domain force-systemd-flag-705501 has defined MAC address 52:54:00:b5:e9:60 in network mk-force-systemd-flag-705501
	I0224 13:11:16.548116  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | unable to find current IP address of domain force-systemd-flag-705501 in network mk-force-systemd-flag-705501
	I0224 13:11:16.548144  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | I0224 13:11:16.548012  934382 retry.go:31] will retry after 447.505277ms: waiting for domain to come up
	I0224 13:11:17.040525  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | domain force-systemd-flag-705501 has defined MAC address 52:54:00:b5:e9:60 in network mk-force-systemd-flag-705501
	I0224 13:11:17.041172  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | unable to find current IP address of domain force-systemd-flag-705501 in network mk-force-systemd-flag-705501
	I0224 13:11:17.041200  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | I0224 13:11:17.041137  934382 retry.go:31] will retry after 485.215487ms: waiting for domain to come up
	I0224 13:11:17.528102  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | domain force-systemd-flag-705501 has defined MAC address 52:54:00:b5:e9:60 in network mk-force-systemd-flag-705501
	I0224 13:11:17.528634  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | unable to find current IP address of domain force-systemd-flag-705501 in network mk-force-systemd-flag-705501
	I0224 13:11:17.528693  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | I0224 13:11:17.528622  934382 retry.go:31] will retry after 480.479367ms: waiting for domain to come up
	I0224 13:11:18.010216  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | domain force-systemd-flag-705501 has defined MAC address 52:54:00:b5:e9:60 in network mk-force-systemd-flag-705501
	I0224 13:11:18.010747  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | unable to find current IP address of domain force-systemd-flag-705501 in network mk-force-systemd-flag-705501
	I0224 13:11:18.010785  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | I0224 13:11:18.010723  934382 retry.go:31] will retry after 651.884594ms: waiting for domain to come up
	I0224 13:11:18.664609  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | domain force-systemd-flag-705501 has defined MAC address 52:54:00:b5:e9:60 in network mk-force-systemd-flag-705501
	I0224 13:11:18.665085  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | unable to find current IP address of domain force-systemd-flag-705501 in network mk-force-systemd-flag-705501
	I0224 13:11:18.665138  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | I0224 13:11:18.665061  934382 retry.go:31] will retry after 757.358789ms: waiting for domain to come up
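The retry lines above show libmachine polling the freshly started domain for a DHCP lease, waiting a little longer after each miss ("will retry after ...ms: waiting for domain to come up"). A minimal sketch of that poll-with-growing-backoff pattern follows; lookupIP and the address it eventually returns are hypothetical placeholders for the real libvirt lease query:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a hypothetical stand-in for querying libvirt's DHCP leases
    // for the domain's MAC address; it fails until a lease appears.
    func lookupIP(attempt int) (string, error) {
        if attempt < 5 {
            return "", errors.New("unable to find current IP address of domain")
        }
        return "192.168.39.27", nil // made-up address for the sketch
    }

    func main() {
        delay := 200 * time.Millisecond
        for attempt := 1; ; attempt++ {
            ip, err := lookupIP(attempt)
            if err == nil {
                fmt.Println("domain is up at", ip)
                return
            }
            // Grow the delay and add jitter, similar in spirit to the
            // "will retry after Nms" lines in the log.
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for domain to come up\n", wait)
            time.Sleep(wait)
            delay += 100 * time.Millisecond
        }
    }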
	I0224 13:11:18.229888  933673 pod_ready.go:93] pod "kube-proxy-mk2vg" in "kube-system" namespace has status "Ready":"True"
	I0224 13:11:18.229913  933673 pod_ready.go:82] duration metric: took 400.533344ms for pod "kube-proxy-mk2vg" in "kube-system" namespace to be "Ready" ...
	I0224 13:11:18.229925  933673 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-290993" in "kube-system" namespace to be "Ready" ...
	I0224 13:11:18.629662  933673 pod_ready.go:93] pod "kube-scheduler-pause-290993" in "kube-system" namespace has status "Ready":"True"
	I0224 13:11:18.629700  933673 pod_ready.go:82] duration metric: took 399.766835ms for pod "kube-scheduler-pause-290993" in "kube-system" namespace to be "Ready" ...
	I0224 13:11:18.629713  933673 pod_ready.go:39] duration metric: took 1.925819677s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
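The pod_ready lines above poll each system-critical pod in kube-system until its Ready condition reports True. A minimal client-go sketch of that check for a single pod is shown below; the kubeconfig path is illustrative, and minikube's own helpers add timeouts and label-based pod discovery on top of this:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Illustrative kubeconfig path; the test harness uses its own profile dir.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-pause-290993", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                // Mirrors the `has status "Ready":"True"` lines in the log.
                fmt.Printf("pod %q Ready=%s\n", pod.Name, cond.Status)
            }
        }
    }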
	I0224 13:11:18.629736  933673 api_server.go:52] waiting for apiserver process to appear ...
	I0224 13:11:18.629811  933673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:11:18.645301  933673 api_server.go:72] duration metric: took 2.161585914s to wait for apiserver process to appear ...
	I0224 13:11:18.645353  933673 api_server.go:88] waiting for apiserver healthz status ...
	I0224 13:11:18.645382  933673 api_server.go:253] Checking apiserver healthz at https://192.168.72.181:8443/healthz ...
	I0224 13:11:18.652414  933673 api_server.go:279] https://192.168.72.181:8443/healthz returned 200:
	ok
	I0224 13:11:18.653383  933673 api_server.go:141] control plane version: v1.32.2
	I0224 13:11:18.653405  933673 api_server.go:131] duration metric: took 8.044526ms to wait for apiserver health ...
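Once the pods are Ready, the log shows a direct probe of the apiserver's /healthz endpoint followed by a version read. A minimal sketch of that probe against the address from the log follows; TLS verification is skipped here only to keep the example short, whereas the real check trusts the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The actual check verifies against the cluster CA; skipping
            // verification here only keeps the sketch short.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.72.181:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // Expecting a 200 status and a body of "ok", as in the log above.
        fmt.Println(resp.StatusCode, string(body))
    }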
	I0224 13:11:18.653413  933673 system_pods.go:43] waiting for kube-system pods to appear ...
	I0224 13:11:18.829296  933673 system_pods.go:59] 6 kube-system pods found
	I0224 13:11:18.829358  933673 system_pods.go:61] "coredns-668d6bf9bc-sqwj8" [216792f5-1104-4be5-bd91-c56ec040853c] Running
	I0224 13:11:18.829365  933673 system_pods.go:61] "etcd-pause-290993" [abfc9069-0ed5-4b71-b6e5-13aabd1a0394] Running
	I0224 13:11:18.829368  933673 system_pods.go:61] "kube-apiserver-pause-290993" [8a1a789e-616c-42a7-944b-72e626dc0dee] Running
	I0224 13:11:18.829372  933673 system_pods.go:61] "kube-controller-manager-pause-290993" [8d523d7a-0768-4c4c-bc94-76f57bdd4e09] Running
	I0224 13:11:18.829377  933673 system_pods.go:61] "kube-proxy-mk2vg" [cae36757-e93e-4727-9ed4-f05ee8363e3f] Running
	I0224 13:11:18.829380  933673 system_pods.go:61] "kube-scheduler-pause-290993" [86443e75-b6ca-442b-801a-0ec5e6e49621] Running
	I0224 13:11:18.829386  933673 system_pods.go:74] duration metric: took 175.967159ms to wait for pod list to return data ...
	I0224 13:11:18.829393  933673 default_sa.go:34] waiting for default service account to be created ...
	I0224 13:11:19.028934  933673 default_sa.go:45] found service account: "default"
	I0224 13:11:19.028968  933673 default_sa.go:55] duration metric: took 199.569032ms for default service account to be created ...
	I0224 13:11:19.028980  933673 system_pods.go:116] waiting for k8s-apps to be running ...
	I0224 13:11:19.230408  933673 system_pods.go:86] 6 kube-system pods found
	I0224 13:11:19.230455  933673 system_pods.go:89] "coredns-668d6bf9bc-sqwj8" [216792f5-1104-4be5-bd91-c56ec040853c] Running
	I0224 13:11:19.230464  933673 system_pods.go:89] "etcd-pause-290993" [abfc9069-0ed5-4b71-b6e5-13aabd1a0394] Running
	I0224 13:11:19.230471  933673 system_pods.go:89] "kube-apiserver-pause-290993" [8a1a789e-616c-42a7-944b-72e626dc0dee] Running
	I0224 13:11:19.230477  933673 system_pods.go:89] "kube-controller-manager-pause-290993" [8d523d7a-0768-4c4c-bc94-76f57bdd4e09] Running
	I0224 13:11:19.230482  933673 system_pods.go:89] "kube-proxy-mk2vg" [cae36757-e93e-4727-9ed4-f05ee8363e3f] Running
	I0224 13:11:19.230490  933673 system_pods.go:89] "kube-scheduler-pause-290993" [86443e75-b6ca-442b-801a-0ec5e6e49621] Running
	I0224 13:11:19.230516  933673 system_pods.go:126] duration metric: took 201.528433ms to wait for k8s-apps to be running ...
	I0224 13:11:19.230527  933673 system_svc.go:44] waiting for kubelet service to be running ....
	I0224 13:11:19.230590  933673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 13:11:19.249909  933673 system_svc.go:56] duration metric: took 19.371763ms WaitForService to wait for kubelet
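The system_svc lines above confirm the kubelet unit is active by running a systemctl is-active check over SSH. A minimal local sketch of the same idea via os/exec (run directly rather than through minikube's ssh_runner; the log shows minikube's own variant of the command):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `systemctl is-active --quiet <unit>` exits 0 when the unit is active.
        cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
        if err := cmd.Run(); err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }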
	I0224 13:11:19.249955  933673 kubeadm.go:582] duration metric: took 2.766242466s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0224 13:11:19.249980  933673 node_conditions.go:102] verifying NodePressure condition ...
	I0224 13:11:19.430208  933673 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0224 13:11:19.430236  933673 node_conditions.go:123] node cpu capacity is 2
	I0224 13:11:19.430250  933673 node_conditions.go:105] duration metric: took 180.263061ms to run NodePressure ...
	I0224 13:11:19.430264  933673 start.go:241] waiting for startup goroutines ...
	I0224 13:11:19.430271  933673 start.go:246] waiting for cluster config update ...
	I0224 13:11:19.430279  933673 start.go:255] writing updated cluster config ...
	I0224 13:11:19.430581  933673 ssh_runner.go:195] Run: rm -f paused
	I0224 13:11:19.485369  933673 start.go:600] kubectl: 1.32.2, cluster: 1.32.2 (minor skew: 0)
	I0224 13:11:19.487532  933673 out.go:177] * Done! kubectl is now configured to use "pause-290993" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 24 13:11:20 pause-290993 crio[2151]: time="2025-02-24 13:11:20.246860611Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=676a6dd3-2b91-4bd3-87a0-cc114fa13f0a name=/runtime.v1.RuntimeService/Version
	Feb 24 13:11:20 pause-290993 crio[2151]: time="2025-02-24 13:11:20.248143630Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=431243fc-3f40-4459-8b27-c5a927ae65b2 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:11:20 pause-290993 crio[2151]: time="2025-02-24 13:11:20.249035455Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740402680248984935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=431243fc-3f40-4459-8b27-c5a927ae65b2 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:11:20 pause-290993 crio[2151]: time="2025-02-24 13:11:20.249316978Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=74625584-3c84-4838-aced-f7e6604b4af9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 24 13:11:20 pause-290993 crio[2151]: time="2025-02-24 13:11:20.249549400Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:4249891088dffed2075269b139dfb29730981c76ef01c04ede491f4452cb1f69,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-sqwj8,Uid:216792f5-1104-4be5-bd91-c56ec040853c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1740402633133247368,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-sqwj8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216792f5-1104-4be5-bd91-c56ec040853c,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-24T13:10:08.962422036Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e52ce03b3fbbb35e449a0a060a747ceb518040bcd9301ff40c016f6a58405762,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-290993,Uid:c340e56a6c3ce70a38356e0ee1000e9c,Namespace:kube-system,
Attempt:1,},State:SANDBOX_READY,CreatedAt:1740402632974500068,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c340e56a6c3ce70a38356e0ee1000e9c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.181:8443,kubernetes.io/config.hash: c340e56a6c3ce70a38356e0ee1000e9c,kubernetes.io/config.seen: 2025-02-24T13:10:03.447012008Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a319254bf6791ae1dacb9c421789737d634cb354379f36a22949ee674ef31c91,Metadata:&PodSandboxMetadata{Name:kube-proxy-mk2vg,Uid:cae36757-e93e-4727-9ed4-f05ee8363e3f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1740402632954485600,Labels:map[string]string{controller-revision-hash: 7bb84c4984,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-mk2vg,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: cae36757-e93e-4727-9ed4-f05ee8363e3f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-24T13:10:08.677878678Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ccd28a2937c5660c8415bf35ceb663a8d7c7e8bdc39289257d7be1759a4bcb37,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-290993,Uid:e6ad2b129b802c71f8413025523e947a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1740402632877927202,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ad2b129b802c71f8413025523e947a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e6ad2b129b802c71f8413025523e947a,kubernetes.io/config.seen: 2025-02-24T13:10:03.447013278Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox
{Id:b497e8fb3520b1d31b88e637daade81fce0cb024591c3fd6c9618a2cbe9b7c95,Metadata:&PodSandboxMetadata{Name:etcd-pause-290993,Uid:1b3d0156fc074bbb0220322372b6a858,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1740402632848971465,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b3d0156fc074bbb0220322372b6a858,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.181:2379,kubernetes.io/config.hash: 1b3d0156fc074bbb0220322372b6a858,kubernetes.io/config.seen: 2025-02-24T13:10:03.447006991Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:69cf6bbd69250f376bb5138f9ba2b5d74bf60ab62bdbccf923ffe526a7b62562,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-290993,Uid:0f6cd22ad0fde2cb33bbe8b1c4a5f91c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1740402632825893363,Lab
els:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f6cd22ad0fde2cb33bbe8b1c4a5f91c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0f6cd22ad0fde2cb33bbe8b1c4a5f91c,kubernetes.io/config.seen: 2025-02-24T13:10:03.447014095Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=74625584-3c84-4838-aced-f7e6604b4af9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 24 13:11:20 pause-290993 crio[2151]: time="2025-02-24 13:11:20.250101391Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cb9455f7-3d0e-4e79-81a9-c1c03ec68ade name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:11:20 pause-290993 crio[2151]: time="2025-02-24 13:11:20.250156304Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cb9455f7-3d0e-4e79-81a9-c1c03ec68ade name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:11:20 pause-290993 crio[2151]: time="2025-02-24 13:11:20.250340419Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8bc430f1-f7d6-4c7d-b791-dc6b2e711de6 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:11:20 pause-290993 crio[2151]: time="2025-02-24 13:11:20.250387478Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8bc430f1-f7d6-4c7d-b791-dc6b2e711de6 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:11:20 pause-290993 crio[2151]: time="2025-02-24 13:11:20.250522355Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f8eafad8b80f5f834ac866ba0d91cb416b18b88038e42992066f312885fce532,PodSandboxId:a319254bf6791ae1dacb9c421789737d634cb354379f36a22949ee674ef31c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1740402661322676622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mk2vg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae36757-e93e-4727-9ed4-f05ee8363e3f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a27b4fbb15791092457f489a9be8d91ddf82476ab6febce07f107071e2db5cd,PodSandboxId:4249891088dffed2075269b139dfb29730981c76ef01c04ede491f4452cb1f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1740402661303312710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-sqwj8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216792f5-1104-4be5-bd91-c56ec040853c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c271018fa2fb5d42d0f6f6dc99dcfddfff803db08293f738c60c8cb621ff670,PodSandboxId:69cf6bbd69250f376bb5138f9ba2b5d74bf60ab62bdbccf923ffe526a7b62562,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1740402657504389624,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f6cd22ad0f
de2cb33bbe8b1c4a5f91c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b89c570f2f66d536871cd493c13ed4abe9852442af6b21ff0e0d0072895434aa,PodSandboxId:b497e8fb3520b1d31b88e637daade81fce0cb024591c3fd6c9618a2cbe9b7c95,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1740402657477598807,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b3d0156fc074bbb0220322372b6a858,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:383050ee1a444fe799f2f435873bc41a638537b8728abb71472c282623fac7b5,PodSandboxId:e52ce03b3fbbb35e449a0a060a747ceb518040bcd9301ff40c016f6a58405762,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1740402657488728534,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c340e56a6c3ce70a38356e0ee1000e9c,},Annotations:map[string]string{io.kubernete
s.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29de9d999e399ffce5e959dcedd5d9cc494443ce36e3127e18f30c500e3de3fe,PodSandboxId:ccd28a2937c5660c8415bf35ceb663a8d7c7e8bdc39289257d7be1759a4bcb37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1740402657458641736,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ad2b129b802c71f8413025523e947a,},Annotations:map[string]string{io.
kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f1ea71953e2c9b870aaeb5ec033589e6c70cf49214670b79afaab38ade0a6e7,PodSandboxId:4249891088dffed2075269b139dfb29730981c76ef01c04ede491f4452cb1f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1740402634300834244,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-sqwj8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216792f5-1104-4be5-bd91-c56ec040853c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a2
04d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f69fc3df30b8e460e2137176d137957ab7add0f72b0fe68f02803a95f23ff915,PodSandboxId:a319254bf6791ae1dacb9c421789737d634cb354379f36a22949ee674ef31c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1740402633576570016,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-mk2vg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae36757-e93e-4727-9ed4-f05ee8363e3f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba98184a2895c8a63929c78ca71192f47dc2c5957bbe799ed35a24f9bfcd63eb,PodSandboxId:b497e8fb3520b1d31b88e637daade81fce0cb024591c3fd6c9618a2cbe9b7c95,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1740402633426032088,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-290993,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 1b3d0156fc074bbb0220322372b6a858,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1f36dc49e5c36127b0839d3e0b61bc18d838bb8a0c448b55d55f1a6191df34,PodSandboxId:e52ce03b3fbbb35e449a0a060a747ceb518040bcd9301ff40c016f6a58405762,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1740402633330735603,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-290993,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: c340e56a6c3ce70a38356e0ee1000e9c,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f1a789af555fca56718d4d1dbba6bd69970fc8e2158cfd4f1d4f49f36bfcfbc,PodSandboxId:ccd28a2937c5660c8415bf35ceb663a8d7c7e8bdc39289257d7be1759a4bcb37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1740402633270459262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-290993,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: e6ad2b129b802c71f8413025523e947a,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0eeb0c95859b126474282095e6479908c48c39791e874d93d6bb6eb25e0bbaa,PodSandboxId:69cf6bbd69250f376bb5138f9ba2b5d74bf60ab62bdbccf923ffe526a7b62562,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1740402633142679917,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 0f6cd22ad0fde2cb33bbe8b1c4a5f91c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cb9455f7-3d0e-4e79-81a9-c1c03ec68ade name=/runtime.v1.RuntimeService/ListContainers
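The CRI-O journal entries above are the runtime answering ListPodSandbox/ListContainers RPCs over the CRI API. The same listing can be requested from the command line with crictl; the sketch below shells out to it from Go, and the socket path is CRI-O's usual default rather than something taken from this log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // crictl issues the same ListContainers RPC that appears in the
        // CRI-O debug log above.
        out, err := exec.Command("crictl",
            "--runtime-endpoint", "unix:///var/run/crio/crio.sock",
            "ps", "--all").CombinedOutput()
        if err != nil {
            fmt.Println("crictl failed:", err)
        }
        fmt.Print(string(out))
    }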
	Feb 24 13:11:20 pause-290993 crio[2151]: time="2025-02-24 13:11:20.250657114Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f8eafad8b80f5f834ac866ba0d91cb416b18b88038e42992066f312885fce532,PodSandboxId:a319254bf6791ae1dacb9c421789737d634cb354379f36a22949ee674ef31c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1740402661322676622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mk2vg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae36757-e93e-4727-9ed4-f05ee8363e3f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a27b4fbb15791092457f489a9be8d91ddf82476ab6febce07f107071e2db5cd,PodSandboxId:4249891088dffed2075269b139dfb29730981c76ef01c04ede491f4452cb1f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1740402661303312710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-sqwj8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216792f5-1104-4be5-bd91-c56ec040853c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c271018fa2fb5d42d0f6f6dc99dcfddfff803db08293f738c60c8cb621ff670,PodSandboxId:69cf6bbd69250f376bb5138f9ba2b5d74bf60ab62bdbccf923ffe526a7b62562,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1740402657504389624,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f6cd22ad0f
de2cb33bbe8b1c4a5f91c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b89c570f2f66d536871cd493c13ed4abe9852442af6b21ff0e0d0072895434aa,PodSandboxId:b497e8fb3520b1d31b88e637daade81fce0cb024591c3fd6c9618a2cbe9b7c95,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1740402657477598807,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b3d0156fc074bbb0220322372b6a858,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:383050ee1a444fe799f2f435873bc41a638537b8728abb71472c282623fac7b5,PodSandboxId:e52ce03b3fbbb35e449a0a060a747ceb518040bcd9301ff40c016f6a58405762,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1740402657488728534,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c340e56a6c3ce70a38356e0ee1000e9c,},Annotations:map[string]string{io.kubernete
s.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29de9d999e399ffce5e959dcedd5d9cc494443ce36e3127e18f30c500e3de3fe,PodSandboxId:ccd28a2937c5660c8415bf35ceb663a8d7c7e8bdc39289257d7be1759a4bcb37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1740402657458641736,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ad2b129b802c71f8413025523e947a,},Annotations:map[string]string{io.
kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f1ea71953e2c9b870aaeb5ec033589e6c70cf49214670b79afaab38ade0a6e7,PodSandboxId:4249891088dffed2075269b139dfb29730981c76ef01c04ede491f4452cb1f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1740402634300834244,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-sqwj8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216792f5-1104-4be5-bd91-c56ec040853c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a2
04d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f69fc3df30b8e460e2137176d137957ab7add0f72b0fe68f02803a95f23ff915,PodSandboxId:a319254bf6791ae1dacb9c421789737d634cb354379f36a22949ee674ef31c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1740402633576570016,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-mk2vg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae36757-e93e-4727-9ed4-f05ee8363e3f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba98184a2895c8a63929c78ca71192f47dc2c5957bbe799ed35a24f9bfcd63eb,PodSandboxId:b497e8fb3520b1d31b88e637daade81fce0cb024591c3fd6c9618a2cbe9b7c95,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1740402633426032088,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-290993,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 1b3d0156fc074bbb0220322372b6a858,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1f36dc49e5c36127b0839d3e0b61bc18d838bb8a0c448b55d55f1a6191df34,PodSandboxId:e52ce03b3fbbb35e449a0a060a747ceb518040bcd9301ff40c016f6a58405762,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1740402633330735603,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-290993,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: c340e56a6c3ce70a38356e0ee1000e9c,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f1a789af555fca56718d4d1dbba6bd69970fc8e2158cfd4f1d4f49f36bfcfbc,PodSandboxId:ccd28a2937c5660c8415bf35ceb663a8d7c7e8bdc39289257d7be1759a4bcb37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1740402633270459262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-290993,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: e6ad2b129b802c71f8413025523e947a,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0eeb0c95859b126474282095e6479908c48c39791e874d93d6bb6eb25e0bbaa,PodSandboxId:69cf6bbd69250f376bb5138f9ba2b5d74bf60ab62bdbccf923ffe526a7b62562,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1740402633142679917,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 0f6cd22ad0fde2cb33bbe8b1c4a5f91c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8bc430f1-f7d6-4c7d-b791-dc6b2e711de6 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:11:20 pause-290993 crio[2151]: time="2025-02-24 13:11:20.300284309Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1750ba0f-f824-4269-b319-6ec01ea422e4 name=/runtime.v1.RuntimeService/Version
	Feb 24 13:11:20 pause-290993 crio[2151]: time="2025-02-24 13:11:20.300378808Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1750ba0f-f824-4269-b319-6ec01ea422e4 name=/runtime.v1.RuntimeService/Version
	Feb 24 13:11:20 pause-290993 crio[2151]: time="2025-02-24 13:11:20.301346711Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f004e399-2228-4ca9-a5c4-57e9969c5b3f name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:11:20 pause-290993 crio[2151]: time="2025-02-24 13:11:20.301780398Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740402680301757390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f004e399-2228-4ca9-a5c4-57e9969c5b3f name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:11:20 pause-290993 crio[2151]: time="2025-02-24 13:11:20.302309806Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9ae30f69-eff4-49ce-b8ea-0ecfec5809c0 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:11:20 pause-290993 crio[2151]: time="2025-02-24 13:11:20.302379398Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ae30f69-eff4-49ce-b8ea-0ecfec5809c0 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:11:20 pause-290993 crio[2151]: time="2025-02-24 13:11:20.302638453Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f8eafad8b80f5f834ac866ba0d91cb416b18b88038e42992066f312885fce532,PodSandboxId:a319254bf6791ae1dacb9c421789737d634cb354379f36a22949ee674ef31c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1740402661322676622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mk2vg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae36757-e93e-4727-9ed4-f05ee8363e3f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a27b4fbb15791092457f489a9be8d91ddf82476ab6febce07f107071e2db5cd,PodSandboxId:4249891088dffed2075269b139dfb29730981c76ef01c04ede491f4452cb1f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1740402661303312710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-sqwj8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216792f5-1104-4be5-bd91-c56ec040853c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c271018fa2fb5d42d0f6f6dc99dcfddfff803db08293f738c60c8cb621ff670,PodSandboxId:69cf6bbd69250f376bb5138f9ba2b5d74bf60ab62bdbccf923ffe526a7b62562,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1740402657504389624,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f6cd22ad0f
de2cb33bbe8b1c4a5f91c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b89c570f2f66d536871cd493c13ed4abe9852442af6b21ff0e0d0072895434aa,PodSandboxId:b497e8fb3520b1d31b88e637daade81fce0cb024591c3fd6c9618a2cbe9b7c95,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1740402657477598807,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b3d0156fc074bbb0220322372b6a858,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:383050ee1a444fe799f2f435873bc41a638537b8728abb71472c282623fac7b5,PodSandboxId:e52ce03b3fbbb35e449a0a060a747ceb518040bcd9301ff40c016f6a58405762,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1740402657488728534,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c340e56a6c3ce70a38356e0ee1000e9c,},Annotations:map[string]string{io.kubernete
s.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29de9d999e399ffce5e959dcedd5d9cc494443ce36e3127e18f30c500e3de3fe,PodSandboxId:ccd28a2937c5660c8415bf35ceb663a8d7c7e8bdc39289257d7be1759a4bcb37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1740402657458641736,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ad2b129b802c71f8413025523e947a,},Annotations:map[string]string{io.
kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f1ea71953e2c9b870aaeb5ec033589e6c70cf49214670b79afaab38ade0a6e7,PodSandboxId:4249891088dffed2075269b139dfb29730981c76ef01c04ede491f4452cb1f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1740402634300834244,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-sqwj8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216792f5-1104-4be5-bd91-c56ec040853c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a2
04d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f69fc3df30b8e460e2137176d137957ab7add0f72b0fe68f02803a95f23ff915,PodSandboxId:a319254bf6791ae1dacb9c421789737d634cb354379f36a22949ee674ef31c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1740402633576570016,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-mk2vg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae36757-e93e-4727-9ed4-f05ee8363e3f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba98184a2895c8a63929c78ca71192f47dc2c5957bbe799ed35a24f9bfcd63eb,PodSandboxId:b497e8fb3520b1d31b88e637daade81fce0cb024591c3fd6c9618a2cbe9b7c95,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1740402633426032088,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-290993,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 1b3d0156fc074bbb0220322372b6a858,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1f36dc49e5c36127b0839d3e0b61bc18d838bb8a0c448b55d55f1a6191df34,PodSandboxId:e52ce03b3fbbb35e449a0a060a747ceb518040bcd9301ff40c016f6a58405762,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1740402633330735603,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-290993,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: c340e56a6c3ce70a38356e0ee1000e9c,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f1a789af555fca56718d4d1dbba6bd69970fc8e2158cfd4f1d4f49f36bfcfbc,PodSandboxId:ccd28a2937c5660c8415bf35ceb663a8d7c7e8bdc39289257d7be1759a4bcb37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1740402633270459262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-290993,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: e6ad2b129b802c71f8413025523e947a,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0eeb0c95859b126474282095e6479908c48c39791e874d93d6bb6eb25e0bbaa,PodSandboxId:69cf6bbd69250f376bb5138f9ba2b5d74bf60ab62bdbccf923ffe526a7b62562,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1740402633142679917,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 0f6cd22ad0fde2cb33bbe8b1c4a5f91c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9ae30f69-eff4-49ce-b8ea-0ecfec5809c0 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:11:20 pause-290993 crio[2151]: time="2025-02-24 13:11:20.348076987Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5861f9ba-dcab-408f-8aad-3f468d4c743c name=/runtime.v1.RuntimeService/Version
	Feb 24 13:11:20 pause-290993 crio[2151]: time="2025-02-24 13:11:20.348176776Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5861f9ba-dcab-408f-8aad-3f468d4c743c name=/runtime.v1.RuntimeService/Version
	Feb 24 13:11:20 pause-290993 crio[2151]: time="2025-02-24 13:11:20.349528680Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ae350abf-feec-463c-8da8-b3b926901bc6 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:11:20 pause-290993 crio[2151]: time="2025-02-24 13:11:20.349944419Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740402680349920734,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae350abf-feec-463c-8da8-b3b926901bc6 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:11:20 pause-290993 crio[2151]: time="2025-02-24 13:11:20.350542748Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0f302e6-7244-4dd3-bbdb-61ec27460152 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:11:20 pause-290993 crio[2151]: time="2025-02-24 13:11:20.350597669Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0f302e6-7244-4dd3-bbdb-61ec27460152 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:11:20 pause-290993 crio[2151]: time="2025-02-24 13:11:20.350849517Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f8eafad8b80f5f834ac866ba0d91cb416b18b88038e42992066f312885fce532,PodSandboxId:a319254bf6791ae1dacb9c421789737d634cb354379f36a22949ee674ef31c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1740402661322676622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mk2vg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae36757-e93e-4727-9ed4-f05ee8363e3f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a27b4fbb15791092457f489a9be8d91ddf82476ab6febce07f107071e2db5cd,PodSandboxId:4249891088dffed2075269b139dfb29730981c76ef01c04ede491f4452cb1f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1740402661303312710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-sqwj8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216792f5-1104-4be5-bd91-c56ec040853c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c271018fa2fb5d42d0f6f6dc99dcfddfff803db08293f738c60c8cb621ff670,PodSandboxId:69cf6bbd69250f376bb5138f9ba2b5d74bf60ab62bdbccf923ffe526a7b62562,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1740402657504389624,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f6cd22ad0f
de2cb33bbe8b1c4a5f91c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b89c570f2f66d536871cd493c13ed4abe9852442af6b21ff0e0d0072895434aa,PodSandboxId:b497e8fb3520b1d31b88e637daade81fce0cb024591c3fd6c9618a2cbe9b7c95,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1740402657477598807,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b3d0156fc074bbb0220322372b6a858,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:383050ee1a444fe799f2f435873bc41a638537b8728abb71472c282623fac7b5,PodSandboxId:e52ce03b3fbbb35e449a0a060a747ceb518040bcd9301ff40c016f6a58405762,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1740402657488728534,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c340e56a6c3ce70a38356e0ee1000e9c,},Annotations:map[string]string{io.kubernete
s.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29de9d999e399ffce5e959dcedd5d9cc494443ce36e3127e18f30c500e3de3fe,PodSandboxId:ccd28a2937c5660c8415bf35ceb663a8d7c7e8bdc39289257d7be1759a4bcb37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1740402657458641736,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ad2b129b802c71f8413025523e947a,},Annotations:map[string]string{io.
kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f1ea71953e2c9b870aaeb5ec033589e6c70cf49214670b79afaab38ade0a6e7,PodSandboxId:4249891088dffed2075269b139dfb29730981c76ef01c04ede491f4452cb1f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1740402634300834244,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-sqwj8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216792f5-1104-4be5-bd91-c56ec040853c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a2
04d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f69fc3df30b8e460e2137176d137957ab7add0f72b0fe68f02803a95f23ff915,PodSandboxId:a319254bf6791ae1dacb9c421789737d634cb354379f36a22949ee674ef31c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1740402633576570016,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-mk2vg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae36757-e93e-4727-9ed4-f05ee8363e3f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba98184a2895c8a63929c78ca71192f47dc2c5957bbe799ed35a24f9bfcd63eb,PodSandboxId:b497e8fb3520b1d31b88e637daade81fce0cb024591c3fd6c9618a2cbe9b7c95,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1740402633426032088,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-290993,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 1b3d0156fc074bbb0220322372b6a858,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1f36dc49e5c36127b0839d3e0b61bc18d838bb8a0c448b55d55f1a6191df34,PodSandboxId:e52ce03b3fbbb35e449a0a060a747ceb518040bcd9301ff40c016f6a58405762,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1740402633330735603,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-290993,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: c340e56a6c3ce70a38356e0ee1000e9c,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f1a789af555fca56718d4d1dbba6bd69970fc8e2158cfd4f1d4f49f36bfcfbc,PodSandboxId:ccd28a2937c5660c8415bf35ceb663a8d7c7e8bdc39289257d7be1759a4bcb37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1740402633270459262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-290993,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: e6ad2b129b802c71f8413025523e947a,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0eeb0c95859b126474282095e6479908c48c39791e874d93d6bb6eb25e0bbaa,PodSandboxId:69cf6bbd69250f376bb5138f9ba2b5d74bf60ab62bdbccf923ffe526a7b62562,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1740402633142679917,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 0f6cd22ad0fde2cb33bbe8b1c4a5f91c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f0f302e6-7244-4dd3-bbdb-61ec27460152 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f8eafad8b80f5       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   19 seconds ago      Running             kube-proxy                2                   a319254bf6791       kube-proxy-mk2vg
	5a27b4fbb1579       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   19 seconds ago      Running             coredns                   2                   4249891088dff       coredns-668d6bf9bc-sqwj8
	2c271018fa2fb       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   22 seconds ago      Running             kube-scheduler            2                   69cf6bbd69250       kube-scheduler-pause-290993
	383050ee1a444       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   22 seconds ago      Running             kube-apiserver            2                   e52ce03b3fbbb       kube-apiserver-pause-290993
	b89c570f2f66d       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   22 seconds ago      Running             etcd                      2                   b497e8fb3520b       etcd-pause-290993
	29de9d999e399       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   22 seconds ago      Running             kube-controller-manager   2                   ccd28a2937c56       kube-controller-manager-pause-290993
	8f1ea71953e2c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   46 seconds ago      Exited              coredns                   1                   4249891088dff       coredns-668d6bf9bc-sqwj8
	f69fc3df30b8e       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   46 seconds ago      Exited              kube-proxy                1                   a319254bf6791       kube-proxy-mk2vg
	ba98184a2895c       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   47 seconds ago      Exited              etcd                      1                   b497e8fb3520b       etcd-pause-290993
	ae1f36dc49e5c       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   47 seconds ago      Exited              kube-apiserver            1                   e52ce03b3fbbb       kube-apiserver-pause-290993
	5f1a789af555f       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   47 seconds ago      Exited              kube-controller-manager   1                   ccd28a2937c56       kube-controller-manager-pause-290993
	c0eeb0c95859b       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   47 seconds ago      Exited              kube-scheduler            1                   69cf6bbd69250       kube-scheduler-pause-290993
	
	
	==> coredns [5a27b4fbb15791092457f489a9be8d91ddf82476ab6febce07f107071e2db5cd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:55164 - 37126 "HINFO IN 6678825283635531683.3778464112934933217. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012455731s
	
	
	==> coredns [8f1ea71953e2c9b870aaeb5ec033589e6c70cf49214670b79afaab38ade0a6e7] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:43817 - 12111 "HINFO IN 5517621473291706977.9065500291906123585. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015152925s
	
	
	==> describe nodes <==
	Name:               pause-290993
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-290993
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76650f53499dbb51707efa4a87e94b72d747650
	                    minikube.k8s.io/name=pause-290993
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_24T13_10_04_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Feb 2025 13:10:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-290993
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Feb 2025 13:11:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Feb 2025 13:11:00 +0000   Mon, 24 Feb 2025 13:09:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Feb 2025 13:11:00 +0000   Mon, 24 Feb 2025 13:09:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Feb 2025 13:11:00 +0000   Mon, 24 Feb 2025 13:09:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Feb 2025 13:11:00 +0000   Mon, 24 Feb 2025 13:10:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.181
	  Hostname:    pause-290993
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 a42fa780a1954c298dac98a527ed2671
	  System UUID:                a42fa780-a195-4c29-8dac-98a527ed2671
	  Boot ID:                    506e35b5-4f60-4410-a994-e4f783351032
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-sqwj8                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     72s
	  kube-system                 etcd-pause-290993                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         77s
	  kube-system                 kube-apiserver-pause-290993             250m (12%)    0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-controller-manager-pause-290993    200m (10%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-proxy-mk2vg                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-scheduler-pause-290993             100m (5%)     0 (0%)      0 (0%)           0 (0%)         77s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 70s                kube-proxy       
	  Normal  Starting                 18s                kube-proxy       
	  Normal  Starting                 43s                kube-proxy       
	  Normal  NodeHasNoDiskPressure    84s (x6 over 84s)  kubelet          Node pause-290993 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  84s (x7 over 84s)  kubelet          Node pause-290993 status is now: NodeHasSufficientMemory
	  Normal  Starting                 84s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     84s (x6 over 84s)  kubelet          Node pause-290993 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  84s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     77s                kubelet          Node pause-290993 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  77s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  77s                kubelet          Node pause-290993 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s                kubelet          Node pause-290993 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 77s                kubelet          Starting kubelet.
	  Normal  NodeReady                76s                kubelet          Node pause-290993 status is now: NodeReady
	  Normal  RegisteredNode           73s                node-controller  Node pause-290993 event: Registered Node pause-290993 in Controller
	  Normal  RegisteredNode           41s                node-controller  Node pause-290993 event: Registered Node pause-290993 in Controller
	  Normal  Starting                 24s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node pause-290993 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node pause-290993 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node pause-290993 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17s                node-controller  Node pause-290993 event: Registered Node pause-290993 in Controller
	
	
	==> dmesg <==
	[  +7.245316] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.064177] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075300] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.233713] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.146312] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.318258] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +4.416729] systemd-fstab-generator[748]: Ignoring "noauto" option for root device
	[  +0.061068] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.546452] systemd-fstab-generator[885]: Ignoring "noauto" option for root device
	[  +0.791377] kauditd_printk_skb: 46 callbacks suppressed
	[Feb24 13:10] systemd-fstab-generator[1248]: Ignoring "noauto" option for root device
	[  +0.180006] kauditd_printk_skb: 41 callbacks suppressed
	[  +4.926285] systemd-fstab-generator[1379]: Ignoring "noauto" option for root device
	[  +0.171707] kauditd_printk_skb: 21 callbacks suppressed
	[ +23.028860] systemd-fstab-generator[2073]: Ignoring "noauto" option for root device
	[  +0.075377] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.067245] systemd-fstab-generator[2085]: Ignoring "noauto" option for root device
	[  +0.168852] systemd-fstab-generator[2099]: Ignoring "noauto" option for root device
	[  +0.144497] systemd-fstab-generator[2112]: Ignoring "noauto" option for root device
	[  +0.279329] systemd-fstab-generator[2140]: Ignoring "noauto" option for root device
	[  +0.748453] systemd-fstab-generator[2272]: Ignoring "noauto" option for root device
	[  +4.217246] kauditd_printk_skb: 195 callbacks suppressed
	[ +19.709038] systemd-fstab-generator[3196]: Ignoring "noauto" option for root device
	[Feb24 13:11] kauditd_printk_skb: 53 callbacks suppressed
	[  +5.350455] systemd-fstab-generator[3677]: Ignoring "noauto" option for root device
	
	
	==> etcd [b89c570f2f66d536871cd493c13ed4abe9852442af6b21ff0e0d0072895434aa] <==
	{"level":"info","ts":"2025-02-24T13:10:57.816602Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-02-24T13:10:57.816630Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-02-24T13:10:57.816636Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-02-24T13:10:57.817271Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.72.181:2380"}
	{"level":"info","ts":"2025-02-24T13:10:57.817308Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.72.181:2380"}
	{"level":"info","ts":"2025-02-24T13:10:57.817935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b074c01599da0f0 switched to configuration voters=(11170980969397985520)"}
	{"level":"info","ts":"2025-02-24T13:10:57.817982Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"38acd6cd2f67011f","local-member-id":"9b074c01599da0f0","added-peer-id":"9b074c01599da0f0","added-peer-peer-urls":["https://192.168.72.181:2380"]}
	{"level":"info","ts":"2025-02-24T13:10:57.818061Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"38acd6cd2f67011f","local-member-id":"9b074c01599da0f0","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-24T13:10:57.818082Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-24T13:10:58.865290Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b074c01599da0f0 is starting a new election at term 3"}
	{"level":"info","ts":"2025-02-24T13:10:58.865407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b074c01599da0f0 became pre-candidate at term 3"}
	{"level":"info","ts":"2025-02-24T13:10:58.865456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b074c01599da0f0 received MsgPreVoteResp from 9b074c01599da0f0 at term 3"}
	{"level":"info","ts":"2025-02-24T13:10:58.865487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b074c01599da0f0 became candidate at term 4"}
	{"level":"info","ts":"2025-02-24T13:10:58.865508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b074c01599da0f0 received MsgVoteResp from 9b074c01599da0f0 at term 4"}
	{"level":"info","ts":"2025-02-24T13:10:58.865530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b074c01599da0f0 became leader at term 4"}
	{"level":"info","ts":"2025-02-24T13:10:58.865548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9b074c01599da0f0 elected leader 9b074c01599da0f0 at term 4"}
	{"level":"info","ts":"2025-02-24T13:10:58.868113Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"9b074c01599da0f0","local-member-attributes":"{Name:pause-290993 ClientURLs:[https://192.168.72.181:2379]}","request-path":"/0/members/9b074c01599da0f0/attributes","cluster-id":"38acd6cd2f67011f","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-24T13:10:58.868295Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-24T13:10:58.868347Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-24T13:10:58.868693Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-24T13:10:58.868729Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-24T13:10:58.869087Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-24T13:10:58.869523Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-24T13:10:58.870985Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-24T13:10:58.869747Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.181:2379"}
	
	
	==> etcd [ba98184a2895c8a63929c78ca71192f47dc2c5957bbe799ed35a24f9bfcd63eb] <==
	{"level":"info","ts":"2025-02-24T13:10:34.867354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b074c01599da0f0 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-02-24T13:10:34.867382Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b074c01599da0f0 received MsgPreVoteResp from 9b074c01599da0f0 at term 2"}
	{"level":"info","ts":"2025-02-24T13:10:34.867395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b074c01599da0f0 became candidate at term 3"}
	{"level":"info","ts":"2025-02-24T13:10:34.867404Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b074c01599da0f0 received MsgVoteResp from 9b074c01599da0f0 at term 3"}
	{"level":"info","ts":"2025-02-24T13:10:34.867412Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b074c01599da0f0 became leader at term 3"}
	{"level":"info","ts":"2025-02-24T13:10:34.867419Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9b074c01599da0f0 elected leader 9b074c01599da0f0 at term 3"}
	{"level":"info","ts":"2025-02-24T13:10:34.871433Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"9b074c01599da0f0","local-member-attributes":"{Name:pause-290993 ClientURLs:[https://192.168.72.181:2379]}","request-path":"/0/members/9b074c01599da0f0/attributes","cluster-id":"38acd6cd2f67011f","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-24T13:10:34.871561Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-24T13:10:34.871624Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-24T13:10:34.872510Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-24T13:10:34.875832Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.181:2379"}
	{"level":"info","ts":"2025-02-24T13:10:34.878485Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-24T13:10:34.878554Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-24T13:10:34.878945Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-24T13:10:34.879601Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-24T13:10:45.008167Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-02-24T13:10:45.008298Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"pause-290993","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.181:2380"],"advertise-client-urls":["https://192.168.72.181:2379"]}
	{"level":"warn","ts":"2025-02-24T13:10:45.008377Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-24T13:10:45.008452Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-24T13:10:45.027902Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.72.181:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-24T13:10:45.028001Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.72.181:2379: use of closed network connection"}
	{"level":"info","ts":"2025-02-24T13:10:45.029541Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9b074c01599da0f0","current-leader-member-id":"9b074c01599da0f0"}
	{"level":"info","ts":"2025-02-24T13:10:45.037157Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.72.181:2380"}
	{"level":"info","ts":"2025-02-24T13:10:45.037690Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.72.181:2380"}
	{"level":"info","ts":"2025-02-24T13:10:45.037738Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"pause-290993","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.181:2380"],"advertise-client-urls":["https://192.168.72.181:2379"]}
	
	
	==> kernel <==
	 13:11:20 up 1 min,  0 users,  load average: 0.98, 0.40, 0.15
	Linux pause-290993 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [383050ee1a444fe799f2f435873bc41a638537b8728abb71472c282623fac7b5] <==
	I0224 13:11:00.328157       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0224 13:11:00.328407       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0224 13:11:00.328477       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0224 13:11:00.334848       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0224 13:11:00.334934       1 policy_source.go:240] refreshing policies
	I0224 13:11:00.349977       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0224 13:11:00.374638       1 shared_informer.go:320] Caches are synced for configmaps
	I0224 13:11:00.375082       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0224 13:11:00.375181       1 aggregator.go:171] initial CRD sync complete...
	I0224 13:11:00.375246       1 autoregister_controller.go:144] Starting autoregister controller
	I0224 13:11:00.375253       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0224 13:11:00.375259       1 cache.go:39] Caches are synced for autoregister controller
	I0224 13:11:00.378938       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0224 13:11:00.385138       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0224 13:11:00.385931       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E0224 13:11:00.398589       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0224 13:11:01.083057       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0224 13:11:01.179975       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0224 13:11:02.280046       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0224 13:11:02.335176       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0224 13:11:02.379120       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0224 13:11:02.395025       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0224 13:11:11.180023       1 controller.go:615] quota admission added evaluator for: endpoints
	I0224 13:11:11.181137       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0224 13:11:11.181550       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-apiserver [ae1f36dc49e5c36127b0839d3e0b61bc18d838bb8a0c448b55d55f1a6191df34] <==
	W0224 13:10:54.529296       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.546998       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.570140       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.604488       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.619341       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.633066       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.645820       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.666306       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.717133       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.732605       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.738463       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.791058       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.823828       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.840507       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.856343       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.920431       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.935127       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.946746       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:55.002983       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:55.013064       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:55.039734       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:55.087345       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:55.091957       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:55.093276       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:55.131852       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [29de9d999e399ffce5e959dcedd5d9cc494443ce36e3127e18f30c500e3de3fe] <==
	I0224 13:11:03.482454       1 shared_informer.go:320] Caches are synced for taint
	I0224 13:11:03.482671       1 shared_informer.go:320] Caches are synced for endpoint
	I0224 13:11:03.484178       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0224 13:11:03.485408       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-290993"
	I0224 13:11:03.485547       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0224 13:11:03.487318       1 shared_informer.go:320] Caches are synced for stateful set
	I0224 13:11:03.487415       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0224 13:11:03.487606       1 shared_informer.go:320] Caches are synced for garbage collector
	I0224 13:11:03.487675       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0224 13:11:03.487698       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0224 13:11:03.488031       1 shared_informer.go:320] Caches are synced for crt configmap
	I0224 13:11:03.490738       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0224 13:11:03.494535       1 shared_informer.go:320] Caches are synced for node
	I0224 13:11:03.494794       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0224 13:11:03.495002       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0224 13:11:03.495036       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0224 13:11:03.495144       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0224 13:11:03.495379       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-290993"
	I0224 13:11:03.495520       1 shared_informer.go:320] Caches are synced for TTL
	I0224 13:11:03.497431       1 shared_informer.go:320] Caches are synced for attach detach
	I0224 13:11:03.497877       1 shared_informer.go:320] Caches are synced for resource quota
	I0224 13:11:03.524449       1 shared_informer.go:320] Caches are synced for garbage collector
	I0224 13:11:03.525743       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0224 13:11:11.193424       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="29.72206ms"
	I0224 13:11:11.193873       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="64.666µs"
	
	
	==> kube-controller-manager [5f1a789af555fca56718d4d1dbba6bd69970fc8e2158cfd4f1d4f49f36bfcfbc] <==
	I0224 13:10:39.802508       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0224 13:10:39.804407       1 shared_informer.go:320] Caches are synced for endpoint
	I0224 13:10:39.805974       1 shared_informer.go:320] Caches are synced for persistent volume
	I0224 13:10:39.813339       1 shared_informer.go:320] Caches are synced for job
	I0224 13:10:39.813495       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0224 13:10:39.813635       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0224 13:10:39.813663       1 shared_informer.go:320] Caches are synced for crt configmap
	I0224 13:10:39.813759       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0224 13:10:39.813782       1 shared_informer.go:320] Caches are synced for HPA
	I0224 13:10:39.813870       1 shared_informer.go:320] Caches are synced for PVC protection
	I0224 13:10:39.813768       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0224 13:10:39.814023       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="62.077µs"
	I0224 13:10:39.821511       1 shared_informer.go:320] Caches are synced for resource quota
	I0224 13:10:39.824791       1 shared_informer.go:320] Caches are synced for node
	I0224 13:10:39.824858       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0224 13:10:39.824913       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0224 13:10:39.824919       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0224 13:10:39.824924       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0224 13:10:39.825006       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-290993"
	I0224 13:10:39.830506       1 shared_informer.go:320] Caches are synced for service account
	I0224 13:10:39.832878       1 shared_informer.go:320] Caches are synced for ephemeral
	I0224 13:10:39.835164       1 shared_informer.go:320] Caches are synced for garbage collector
	I0224 13:10:39.844421       1 shared_informer.go:320] Caches are synced for namespace
	I0224 13:10:39.846800       1 shared_informer.go:320] Caches are synced for GC
	I0224 13:10:44.979338       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="164.69µs"
	
	
	==> kube-proxy [f69fc3df30b8e460e2137176d137957ab7add0f72b0fe68f02803a95f23ff915] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0224 13:10:35.203367       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0224 13:10:36.730409       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.72.181"]
	E0224 13:10:36.730850       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0224 13:10:36.806024       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0224 13:10:36.806131       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0224 13:10:36.806169       1 server_linux.go:170] "Using iptables Proxier"
	I0224 13:10:36.808971       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0224 13:10:36.810019       1 server.go:497] "Version info" version="v1.32.2"
	I0224 13:10:36.810087       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 13:10:36.817890       1 config.go:105] "Starting endpoint slice config controller"
	I0224 13:10:36.817951       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0224 13:10:36.818022       1 config.go:199] "Starting service config controller"
	I0224 13:10:36.818047       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0224 13:10:36.834449       1 config.go:329] "Starting node config controller"
	I0224 13:10:36.834464       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0224 13:10:36.918733       1 shared_informer.go:320] Caches are synced for service config
	I0224 13:10:36.918953       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0224 13:10:36.934539       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f8eafad8b80f5f834ac866ba0d91cb416b18b88038e42992066f312885fce532] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0224 13:11:01.596127       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0224 13:11:01.606916       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.72.181"]
	E0224 13:11:01.606992       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0224 13:11:01.653621       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0224 13:11:01.653684       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0224 13:11:01.653711       1 server_linux.go:170] "Using iptables Proxier"
	I0224 13:11:01.656596       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0224 13:11:01.656895       1 server.go:497] "Version info" version="v1.32.2"
	I0224 13:11:01.656925       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 13:11:01.658994       1 config.go:199] "Starting service config controller"
	I0224 13:11:01.659078       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0224 13:11:01.659118       1 config.go:105] "Starting endpoint slice config controller"
	I0224 13:11:01.659123       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0224 13:11:01.659543       1 config.go:329] "Starting node config controller"
	I0224 13:11:01.659576       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0224 13:11:01.759415       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0224 13:11:01.759512       1 shared_informer.go:320] Caches are synced for service config
	I0224 13:11:01.759693       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2c271018fa2fb5d42d0f6f6dc99dcfddfff803db08293f738c60c8cb621ff670] <==
	I0224 13:10:58.552055       1 serving.go:386] Generated self-signed cert in-memory
	W0224 13:11:00.278401       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0224 13:11:00.278582       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0224 13:11:00.278592       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0224 13:11:00.278599       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0224 13:11:00.373015       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0224 13:11:00.373119       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 13:11:00.377662       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0224 13:11:00.377963       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0224 13:11:00.378013       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0224 13:11:00.378043       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0224 13:11:00.478869       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c0eeb0c95859b126474282095e6479908c48c39791e874d93d6bb6eb25e0bbaa] <==
	I0224 13:10:35.065607       1 serving.go:386] Generated self-signed cert in-memory
	W0224 13:10:36.568140       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0224 13:10:36.568447       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0224 13:10:36.568550       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0224 13:10:36.568658       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0224 13:10:36.635765       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0224 13:10:36.637740       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 13:10:36.644657       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0224 13:10:36.646394       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0224 13:10:36.651284       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0224 13:10:36.646419       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0224 13:10:36.752734       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0224 13:10:55.372901       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Feb 24 13:11:00 pause-290993 kubelet[3203]: E0224 13:11:00.160836    3203 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-290993\" not found" node="pause-290993"
	Feb 24 13:11:00 pause-290993 kubelet[3203]: I0224 13:11:00.298599    3203 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-290993"
	Feb 24 13:11:00 pause-290993 kubelet[3203]: E0224 13:11:00.388504    3203 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-290993\" already exists" pod="kube-system/kube-controller-manager-pause-290993"
	Feb 24 13:11:00 pause-290993 kubelet[3203]: I0224 13:11:00.388553    3203 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-290993"
	Feb 24 13:11:00 pause-290993 kubelet[3203]: E0224 13:11:00.404412    3203 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-290993\" already exists" pod="kube-system/kube-scheduler-pause-290993"
	Feb 24 13:11:00 pause-290993 kubelet[3203]: I0224 13:11:00.404596    3203 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-290993"
	Feb 24 13:11:00 pause-290993 kubelet[3203]: E0224 13:11:00.415345    3203 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-290993\" already exists" pod="kube-system/etcd-pause-290993"
	Feb 24 13:11:00 pause-290993 kubelet[3203]: I0224 13:11:00.415393    3203 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-290993"
	Feb 24 13:11:00 pause-290993 kubelet[3203]: E0224 13:11:00.430409    3203 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-290993\" already exists" pod="kube-system/kube-apiserver-pause-290993"
	Feb 24 13:11:00 pause-290993 kubelet[3203]: I0224 13:11:00.433528    3203 kubelet_node_status.go:125] "Node was previously registered" node="pause-290993"
	Feb 24 13:11:00 pause-290993 kubelet[3203]: I0224 13:11:00.433631    3203 kubelet_node_status.go:79] "Successfully registered node" node="pause-290993"
	Feb 24 13:11:00 pause-290993 kubelet[3203]: I0224 13:11:00.433663    3203 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 24 13:11:00 pause-290993 kubelet[3203]: I0224 13:11:00.434583    3203 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 24 13:11:00 pause-290993 kubelet[3203]: I0224 13:11:00.967892    3203 apiserver.go:52] "Watching apiserver"
	Feb 24 13:11:00 pause-290993 kubelet[3203]: I0224 13:11:00.988606    3203 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Feb 24 13:11:01 pause-290993 kubelet[3203]: I0224 13:11:01.079152    3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cae36757-e93e-4727-9ed4-f05ee8363e3f-lib-modules\") pod \"kube-proxy-mk2vg\" (UID: \"cae36757-e93e-4727-9ed4-f05ee8363e3f\") " pod="kube-system/kube-proxy-mk2vg"
	Feb 24 13:11:01 pause-290993 kubelet[3203]: I0224 13:11:01.079277    3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cae36757-e93e-4727-9ed4-f05ee8363e3f-xtables-lock\") pod \"kube-proxy-mk2vg\" (UID: \"cae36757-e93e-4727-9ed4-f05ee8363e3f\") " pod="kube-system/kube-proxy-mk2vg"
	Feb 24 13:11:01 pause-290993 kubelet[3203]: I0224 13:11:01.274183    3203 scope.go:117] "RemoveContainer" containerID="8f1ea71953e2c9b870aaeb5ec033589e6c70cf49214670b79afaab38ade0a6e7"
	Feb 24 13:11:01 pause-290993 kubelet[3203]: I0224 13:11:01.276303    3203 scope.go:117] "RemoveContainer" containerID="f69fc3df30b8e460e2137176d137957ab7add0f72b0fe68f02803a95f23ff915"
	Feb 24 13:11:03 pause-290993 kubelet[3203]: I0224 13:11:03.182525    3203 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Feb 24 13:11:07 pause-290993 kubelet[3203]: E0224 13:11:07.117499    3203 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740402667115717189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 24 13:11:07 pause-290993 kubelet[3203]: E0224 13:11:07.117558    3203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740402667115717189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 24 13:11:11 pause-290993 kubelet[3203]: I0224 13:11:11.124252    3203 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Feb 24 13:11:17 pause-290993 kubelet[3203]: E0224 13:11:17.119601    3203 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740402677119086405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 24 13:11:17 pause-290993 kubelet[3203]: E0224 13:11:17.119664    3203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740402677119086405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-290993 -n pause-290993
helpers_test.go:261: (dbg) Run:  kubectl --context pause-290993 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-290993 -n pause-290993
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-290993 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-290993 logs -n 25: (1.393290743s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-799329 sudo                  | cilium-799329             | jenkins | v1.35.0 | 24 Feb 25 13:08 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-799329 sudo cat              | cilium-799329             | jenkins | v1.35.0 | 24 Feb 25 13:08 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-799329 sudo cat              | cilium-799329             | jenkins | v1.35.0 | 24 Feb 25 13:08 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-799329 sudo                  | cilium-799329             | jenkins | v1.35.0 | 24 Feb 25 13:08 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-799329 sudo                  | cilium-799329             | jenkins | v1.35.0 | 24 Feb 25 13:08 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-799329 sudo                  | cilium-799329             | jenkins | v1.35.0 | 24 Feb 25 13:08 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-799329 sudo find             | cilium-799329             | jenkins | v1.35.0 | 24 Feb 25 13:08 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-799329 sudo crio             | cilium-799329             | jenkins | v1.35.0 | 24 Feb 25 13:08 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-799329                       | cilium-799329             | jenkins | v1.35.0 | 24 Feb 25 13:08 UTC | 24 Feb 25 13:08 UTC |
	| start   | -p kubernetes-upgrade-973775           | kubernetes-upgrade-973775 | jenkins | v1.35.0 | 24 Feb 25 13:08 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p offline-crio-226975                 | offline-crio-226975       | jenkins | v1.35.0 | 24 Feb 25 13:08 UTC | 24 Feb 25 13:08 UTC |
	| start   | -p pause-290993 --memory=2048          | pause-290993              | jenkins | v1.35.0 | 24 Feb 25 13:08 UTC | 24 Feb 25 13:10 UTC |
	|         | --install-addons=false                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2               |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-248837                 | NoKubernetes-248837       | jenkins | v1.35.0 | 24 Feb 25 13:08 UTC | 24 Feb 25 13:09 UTC |
	|         | --no-kubernetes --driver=kvm2          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p running-upgrade-271664              | running-upgrade-271664    | jenkins | v1.35.0 | 24 Feb 25 13:09 UTC | 24 Feb 25 13:11 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-248837                 | NoKubernetes-248837       | jenkins | v1.35.0 | 24 Feb 25 13:09 UTC | 24 Feb 25 13:09 UTC |
	| start   | -p NoKubernetes-248837                 | NoKubernetes-248837       | jenkins | v1.35.0 | 24 Feb 25 13:09 UTC | 24 Feb 25 13:10 UTC |
	|         | --no-kubernetes --driver=kvm2          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p pause-290993                        | pause-290993              | jenkins | v1.35.0 | 24 Feb 25 13:10 UTC | 24 Feb 25 13:11 UTC |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-248837 sudo            | NoKubernetes-248837       | jenkins | v1.35.0 | 24 Feb 25 13:10 UTC |                     |
	|         | systemctl is-active --quiet            |                           |         |         |                     |                     |
	|         | service kubelet                        |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-248837                 | NoKubernetes-248837       | jenkins | v1.35.0 | 24 Feb 25 13:10 UTC | 24 Feb 25 13:10 UTC |
	| start   | -p NoKubernetes-248837                 | NoKubernetes-248837       | jenkins | v1.35.0 | 24 Feb 25 13:10 UTC | 24 Feb 25 13:11 UTC |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-271664              | running-upgrade-271664    | jenkins | v1.35.0 | 24 Feb 25 13:11 UTC | 24 Feb 25 13:11 UTC |
	| start   | -p force-systemd-flag-705501           | force-systemd-flag-705501 | jenkins | v1.35.0 | 24 Feb 25 13:11 UTC |                     |
	|         | --memory=2048 --force-systemd          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-248837 sudo            | NoKubernetes-248837       | jenkins | v1.35.0 | 24 Feb 25 13:11 UTC |                     |
	|         | systemctl is-active --quiet            |                           |         |         |                     |                     |
	|         | service kubelet                        |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-248837                 | NoKubernetes-248837       | jenkins | v1.35.0 | 24 Feb 25 13:11 UTC | 24 Feb 25 13:11 UTC |
	| start   | -p cert-expiration-993480              | cert-expiration-993480    | jenkins | v1.35.0 | 24 Feb 25 13:11 UTC |                     |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/24 13:11:17
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 13:11:17.166257  934589 out.go:345] Setting OutFile to fd 1 ...
	I0224 13:11:17.166487  934589 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 13:11:17.166491  934589 out.go:358] Setting ErrFile to fd 2...
	I0224 13:11:17.166494  934589 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 13:11:17.166709  934589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-887294/.minikube/bin
	I0224 13:11:17.167324  934589 out.go:352] Setting JSON to false
	I0224 13:11:17.168425  934589 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10418,"bootTime":1740392259,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 13:11:17.168524  934589 start.go:139] virtualization: kvm guest
	I0224 13:11:17.171025  934589 out.go:177] * [cert-expiration-993480] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0224 13:11:17.172583  934589 out.go:177]   - MINIKUBE_LOCATION=20451
	I0224 13:11:17.172612  934589 notify.go:220] Checking for updates...
	I0224 13:11:17.175170  934589 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 13:11:17.176837  934589 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20451-887294/kubeconfig
	I0224 13:11:17.178345  934589 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-887294/.minikube
	I0224 13:11:17.179808  934589 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0224 13:11:17.181135  934589 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 13:11:17.183142  934589 config.go:182] Loaded profile config "force-systemd-flag-705501": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 13:11:17.183295  934589 config.go:182] Loaded profile config "kubernetes-upgrade-973775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0224 13:11:17.183465  934589 config.go:182] Loaded profile config "pause-290993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 13:11:17.183606  934589 driver.go:394] Setting default libvirt URI to qemu:///system
	I0224 13:11:17.222934  934589 out.go:177] * Using the kvm2 driver based on user configuration
	I0224 13:11:17.224501  934589 start.go:297] selected driver: kvm2
	I0224 13:11:17.224515  934589 start.go:901] validating driver "kvm2" against <nil>
	I0224 13:11:17.224527  934589 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 13:11:17.225323  934589 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 13:11:17.225410  934589 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20451-887294/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0224 13:11:17.243984  934589 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0224 13:11:17.244028  934589 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0224 13:11:17.244297  934589 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0224 13:11:17.244330  934589 cni.go:84] Creating CNI manager for ""
	I0224 13:11:17.244370  934589 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0224 13:11:17.244376  934589 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0224 13:11:17.244446  934589 start.go:340] cluster config:
	{Name:cert-expiration-993480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:cert-expiration-993480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 13:11:17.244582  934589 iso.go:125] acquiring lock: {Name:mk57408cca66a96a13d93cda43cdfac6e61aef3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 13:11:17.249767  934589 out.go:177] * Starting "cert-expiration-993480" primary control-plane node in "cert-expiration-993480" cluster
	I0224 13:11:16.487537  933673 addons.go:514] duration metric: took 3.740369ms for enable addons: enabled=[]
	I0224 13:11:16.488454  933673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 13:11:16.679569  933673 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0224 13:11:16.700972  933673 node_ready.go:35] waiting up to 6m0s for node "pause-290993" to be "Ready" ...
	I0224 13:11:16.703818  933673 node_ready.go:49] node "pause-290993" has status "Ready":"True"
	I0224 13:11:16.703858  933673 node_ready.go:38] duration metric: took 2.823027ms for node "pause-290993" to be "Ready" ...
	I0224 13:11:16.703872  933673 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 13:11:16.706865  933673 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-sqwj8" in "kube-system" namespace to be "Ready" ...
	I0224 13:11:16.713069  933673 pod_ready.go:93] pod "coredns-668d6bf9bc-sqwj8" in "kube-system" namespace has status "Ready":"True"
	I0224 13:11:16.713094  933673 pod_ready.go:82] duration metric: took 6.192977ms for pod "coredns-668d6bf9bc-sqwj8" in "kube-system" namespace to be "Ready" ...
	I0224 13:11:16.713104  933673 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-290993" in "kube-system" namespace to be "Ready" ...
	I0224 13:11:17.031260  933673 pod_ready.go:93] pod "etcd-pause-290993" in "kube-system" namespace has status "Ready":"True"
	I0224 13:11:17.031294  933673 pod_ready.go:82] duration metric: took 318.182683ms for pod "etcd-pause-290993" in "kube-system" namespace to be "Ready" ...
	I0224 13:11:17.031311  933673 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-290993" in "kube-system" namespace to be "Ready" ...
	I0224 13:11:17.429529  933673 pod_ready.go:93] pod "kube-apiserver-pause-290993" in "kube-system" namespace has status "Ready":"True"
	I0224 13:11:17.429562  933673 pod_ready.go:82] duration metric: took 398.239067ms for pod "kube-apiserver-pause-290993" in "kube-system" namespace to be "Ready" ...
	I0224 13:11:17.429578  933673 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-290993" in "kube-system" namespace to be "Ready" ...
	I0224 13:11:17.829302  933673 pod_ready.go:93] pod "kube-controller-manager-pause-290993" in "kube-system" namespace has status "Ready":"True"
	I0224 13:11:17.829360  933673 pod_ready.go:82] duration metric: took 399.774384ms for pod "kube-controller-manager-pause-290993" in "kube-system" namespace to be "Ready" ...
	I0224 13:11:17.829374  933673 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mk2vg" in "kube-system" namespace to be "Ready" ...
	I0224 13:11:14.121503  934320 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0224 13:11:14.121793  934320 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:11:14.121873  934320 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:11:14.139268  934320 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41663
	I0224 13:11:14.139733  934320 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:11:14.140428  934320 main.go:141] libmachine: Using API Version  1
	I0224 13:11:14.140449  934320 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:11:14.140867  934320 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:11:14.141124  934320 main.go:141] libmachine: (force-systemd-flag-705501) Calling .GetMachineName
	I0224 13:11:14.141349  934320 main.go:141] libmachine: (force-systemd-flag-705501) Calling .DriverName
	I0224 13:11:14.141560  934320 start.go:159] libmachine.API.Create for "force-systemd-flag-705501" (driver="kvm2")
	I0224 13:11:14.141610  934320 client.go:168] LocalClient.Create starting
	I0224 13:11:14.141659  934320 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem
	I0224 13:11:14.141711  934320 main.go:141] libmachine: Decoding PEM data...
	I0224 13:11:14.141731  934320 main.go:141] libmachine: Parsing certificate...
	I0224 13:11:14.141812  934320 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem
	I0224 13:11:14.141841  934320 main.go:141] libmachine: Decoding PEM data...
	I0224 13:11:14.141862  934320 main.go:141] libmachine: Parsing certificate...
	I0224 13:11:14.141890  934320 main.go:141] libmachine: Running pre-create checks...
	I0224 13:11:14.141906  934320 main.go:141] libmachine: (force-systemd-flag-705501) Calling .PreCreateCheck
	I0224 13:11:14.142391  934320 main.go:141] libmachine: (force-systemd-flag-705501) Calling .GetConfigRaw
	I0224 13:11:14.142858  934320 main.go:141] libmachine: Creating machine...
	I0224 13:11:14.142873  934320 main.go:141] libmachine: (force-systemd-flag-705501) Calling .Create
	I0224 13:11:14.143012  934320 main.go:141] libmachine: (force-systemd-flag-705501) creating KVM machine...
	I0224 13:11:14.143037  934320 main.go:141] libmachine: (force-systemd-flag-705501) creating network...
	I0224 13:11:14.144496  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | found existing default KVM network
	I0224 13:11:14.146225  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | I0224 13:11:14.146056  934382 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000266180}
	I0224 13:11:14.146251  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | created network xml: 
	I0224 13:11:14.146272  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | <network>
	I0224 13:11:14.146281  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG |   <name>mk-force-systemd-flag-705501</name>
	I0224 13:11:14.146291  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG |   <dns enable='no'/>
	I0224 13:11:14.146309  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG |   
	I0224 13:11:14.146324  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0224 13:11:14.146339  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG |     <dhcp>
	I0224 13:11:14.146354  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0224 13:11:14.146365  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG |     </dhcp>
	I0224 13:11:14.146377  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG |   </ip>
	I0224 13:11:14.146387  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG |   
	I0224 13:11:14.146395  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | </network>
	I0224 13:11:14.146405  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | 
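
	The XML dump above is exactly what the kvm2 driver hands to libvirt to create the private network. As a rough illustration only (not minikube's actual code; the libvirt.org/go/libvirt binding, URI, and error handling are assumptions), defining and activating such a network looks roughly like this:

	    package main

	    import (
	        "log"

	        libvirt "libvirt.org/go/libvirt"
	    )

	    func main() {
	        // Connect to the system libvirt daemon.
	        conn, err := libvirt.NewConnect("qemu:///system")
	        if err != nil {
	            log.Fatalf("connect: %v", err)
	        }
	        defer conn.Close()

	        // Same shape of XML as printed in the log above.
	        netXML := `<network>
	      <name>mk-force-systemd-flag-705501</name>
	      <dns enable='no'/>
	      <ip address='192.168.39.1' netmask='255.255.255.0'>
	        <dhcp>
	          <range start='192.168.39.2' end='192.168.39.253'/>
	        </dhcp>
	      </ip>
	    </network>`

	        // Define a persistent network from the XML, then bring it up.
	        network, err := conn.NetworkDefineXML(netXML)
	        if err != nil {
	            log.Fatalf("define network: %v", err)
	        }
	        defer network.Free()

	        if err := network.Create(); err != nil {
	            log.Fatalf("start network: %v", err)
	        }
	        log.Println("private KVM network is active")
	    }
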
	I0224 13:11:14.152234  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | trying to create private KVM network mk-force-systemd-flag-705501 192.168.39.0/24...
	I0224 13:11:14.232218  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | private KVM network mk-force-systemd-flag-705501 192.168.39.0/24 created
	I0224 13:11:14.232312  934320 main.go:141] libmachine: (force-systemd-flag-705501) setting up store path in /home/jenkins/minikube-integration/20451-887294/.minikube/machines/force-systemd-flag-705501 ...
	I0224 13:11:14.232342  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | I0224 13:11:14.232227  934382 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20451-887294/.minikube
	I0224 13:11:14.232360  934320 main.go:141] libmachine: (force-systemd-flag-705501) building disk image from file:///home/jenkins/minikube-integration/20451-887294/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0224 13:11:14.232469  934320 main.go:141] libmachine: (force-systemd-flag-705501) Downloading /home/jenkins/minikube-integration/20451-887294/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20451-887294/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0224 13:11:14.534873  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | I0224 13:11:14.534682  934382 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/force-systemd-flag-705501/id_rsa...
	I0224 13:11:14.745366  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | I0224 13:11:14.745208  934382 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/force-systemd-flag-705501/force-systemd-flag-705501.rawdisk...
	I0224 13:11:14.745399  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | Writing magic tar header
	I0224 13:11:14.745415  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | Writing SSH key tar header
	I0224 13:11:14.745430  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | I0224 13:11:14.745384  934382 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20451-887294/.minikube/machines/force-systemd-flag-705501 ...
	I0224 13:11:14.745571  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/force-systemd-flag-705501
	I0224 13:11:14.745603  934320 main.go:141] libmachine: (force-systemd-flag-705501) setting executable bit set on /home/jenkins/minikube-integration/20451-887294/.minikube/machines/force-systemd-flag-705501 (perms=drwx------)
	I0224 13:11:14.745635  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20451-887294/.minikube/machines
	I0224 13:11:14.745656  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20451-887294/.minikube
	I0224 13:11:14.745669  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20451-887294
	I0224 13:11:14.745683  934320 main.go:141] libmachine: (force-systemd-flag-705501) setting executable bit set on /home/jenkins/minikube-integration/20451-887294/.minikube/machines (perms=drwxr-xr-x)
	I0224 13:11:14.745700  934320 main.go:141] libmachine: (force-systemd-flag-705501) setting executable bit set on /home/jenkins/minikube-integration/20451-887294/.minikube (perms=drwxr-xr-x)
	I0224 13:11:14.745712  934320 main.go:141] libmachine: (force-systemd-flag-705501) setting executable bit set on /home/jenkins/minikube-integration/20451-887294 (perms=drwxrwxr-x)
	I0224 13:11:14.745724  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0224 13:11:14.745737  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | checking permissions on dir: /home/jenkins
	I0224 13:11:14.745746  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | checking permissions on dir: /home
	I0224 13:11:14.745758  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | skipping /home - not owner
	I0224 13:11:14.745769  934320 main.go:141] libmachine: (force-systemd-flag-705501) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0224 13:11:14.745783  934320 main.go:141] libmachine: (force-systemd-flag-705501) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
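
	The permission pass above walks from the machine directory up toward /, skips directories the current user does not own (hence "skipping /home - not owner"), and makes sure the owner execute (search) bit is set so the path stays traversable. A rough Linux-only sketch of that walk, with an illustrative helper name:

	    import (
	        "os"
	        "path/filepath"
	        "syscall"
	    )

	    // ensureTraversable adds the owner execute bit to every ancestor of dir
	    // that the current user owns, mirroring the "setting executable bit" and
	    // "skipping ... - not owner" lines above.
	    func ensureTraversable(dir string) error {
	        uid := os.Getuid()
	        for d := dir; d != "/" && d != filepath.Dir(d); d = filepath.Dir(d) {
	            info, err := os.Stat(d)
	            if err != nil {
	                return err
	            }
	            st, ok := info.Sys().(*syscall.Stat_t)
	            if !ok || int(st.Uid) != uid {
	                continue // not owner: skip this directory
	            }
	            if info.Mode().Perm()&0o100 == 0 {
	                if err := os.Chmod(d, info.Mode().Perm()|0o100); err != nil {
	                    return err
	                }
	            }
	        }
	        return nil
	    }
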
	I0224 13:11:14.745792  934320 main.go:141] libmachine: (force-systemd-flag-705501) creating domain...
	I0224 13:11:14.747007  934320 main.go:141] libmachine: (force-systemd-flag-705501) define libvirt domain using xml: 
	I0224 13:11:14.747029  934320 main.go:141] libmachine: (force-systemd-flag-705501) <domain type='kvm'>
	I0224 13:11:14.747039  934320 main.go:141] libmachine: (force-systemd-flag-705501)   <name>force-systemd-flag-705501</name>
	I0224 13:11:14.747047  934320 main.go:141] libmachine: (force-systemd-flag-705501)   <memory unit='MiB'>2048</memory>
	I0224 13:11:14.747055  934320 main.go:141] libmachine: (force-systemd-flag-705501)   <vcpu>2</vcpu>
	I0224 13:11:14.747062  934320 main.go:141] libmachine: (force-systemd-flag-705501)   <features>
	I0224 13:11:14.747069  934320 main.go:141] libmachine: (force-systemd-flag-705501)     <acpi/>
	I0224 13:11:14.747076  934320 main.go:141] libmachine: (force-systemd-flag-705501)     <apic/>
	I0224 13:11:14.747086  934320 main.go:141] libmachine: (force-systemd-flag-705501)     <pae/>
	I0224 13:11:14.747094  934320 main.go:141] libmachine: (force-systemd-flag-705501)     
	I0224 13:11:14.747105  934320 main.go:141] libmachine: (force-systemd-flag-705501)   </features>
	I0224 13:11:14.747116  934320 main.go:141] libmachine: (force-systemd-flag-705501)   <cpu mode='host-passthrough'>
	I0224 13:11:14.747125  934320 main.go:141] libmachine: (force-systemd-flag-705501)   
	I0224 13:11:14.747132  934320 main.go:141] libmachine: (force-systemd-flag-705501)   </cpu>
	I0224 13:11:14.747198  934320 main.go:141] libmachine: (force-systemd-flag-705501)   <os>
	I0224 13:11:14.747224  934320 main.go:141] libmachine: (force-systemd-flag-705501)     <type>hvm</type>
	I0224 13:11:14.747237  934320 main.go:141] libmachine: (force-systemd-flag-705501)     <boot dev='cdrom'/>
	I0224 13:11:14.747249  934320 main.go:141] libmachine: (force-systemd-flag-705501)     <boot dev='hd'/>
	I0224 13:11:14.747262  934320 main.go:141] libmachine: (force-systemd-flag-705501)     <bootmenu enable='no'/>
	I0224 13:11:14.747272  934320 main.go:141] libmachine: (force-systemd-flag-705501)   </os>
	I0224 13:11:14.747283  934320 main.go:141] libmachine: (force-systemd-flag-705501)   <devices>
	I0224 13:11:14.747298  934320 main.go:141] libmachine: (force-systemd-flag-705501)     <disk type='file' device='cdrom'>
	I0224 13:11:14.747321  934320 main.go:141] libmachine: (force-systemd-flag-705501)       <source file='/home/jenkins/minikube-integration/20451-887294/.minikube/machines/force-systemd-flag-705501/boot2docker.iso'/>
	I0224 13:11:14.747335  934320 main.go:141] libmachine: (force-systemd-flag-705501)       <target dev='hdc' bus='scsi'/>
	I0224 13:11:14.747346  934320 main.go:141] libmachine: (force-systemd-flag-705501)       <readonly/>
	I0224 13:11:14.747357  934320 main.go:141] libmachine: (force-systemd-flag-705501)     </disk>
	I0224 13:11:14.747368  934320 main.go:141] libmachine: (force-systemd-flag-705501)     <disk type='file' device='disk'>
	I0224 13:11:14.747382  934320 main.go:141] libmachine: (force-systemd-flag-705501)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0224 13:11:14.747406  934320 main.go:141] libmachine: (force-systemd-flag-705501)       <source file='/home/jenkins/minikube-integration/20451-887294/.minikube/machines/force-systemd-flag-705501/force-systemd-flag-705501.rawdisk'/>
	I0224 13:11:14.747419  934320 main.go:141] libmachine: (force-systemd-flag-705501)       <target dev='hda' bus='virtio'/>
	I0224 13:11:14.747429  934320 main.go:141] libmachine: (force-systemd-flag-705501)     </disk>
	I0224 13:11:14.747438  934320 main.go:141] libmachine: (force-systemd-flag-705501)     <interface type='network'>
	I0224 13:11:14.747450  934320 main.go:141] libmachine: (force-systemd-flag-705501)       <source network='mk-force-systemd-flag-705501'/>
	I0224 13:11:14.747463  934320 main.go:141] libmachine: (force-systemd-flag-705501)       <model type='virtio'/>
	I0224 13:11:14.747477  934320 main.go:141] libmachine: (force-systemd-flag-705501)     </interface>
	I0224 13:11:14.747490  934320 main.go:141] libmachine: (force-systemd-flag-705501)     <interface type='network'>
	I0224 13:11:14.747501  934320 main.go:141] libmachine: (force-systemd-flag-705501)       <source network='default'/>
	I0224 13:11:14.747518  934320 main.go:141] libmachine: (force-systemd-flag-705501)       <model type='virtio'/>
	I0224 13:11:14.747529  934320 main.go:141] libmachine: (force-systemd-flag-705501)     </interface>
	I0224 13:11:14.747538  934320 main.go:141] libmachine: (force-systemd-flag-705501)     <serial type='pty'>
	I0224 13:11:14.747552  934320 main.go:141] libmachine: (force-systemd-flag-705501)       <target port='0'/>
	I0224 13:11:14.747565  934320 main.go:141] libmachine: (force-systemd-flag-705501)     </serial>
	I0224 13:11:14.747575  934320 main.go:141] libmachine: (force-systemd-flag-705501)     <console type='pty'>
	I0224 13:11:14.747587  934320 main.go:141] libmachine: (force-systemd-flag-705501)       <target type='serial' port='0'/>
	I0224 13:11:14.747596  934320 main.go:141] libmachine: (force-systemd-flag-705501)     </console>
	I0224 13:11:14.747608  934320 main.go:141] libmachine: (force-systemd-flag-705501)     <rng model='virtio'>
	I0224 13:11:14.747620  934320 main.go:141] libmachine: (force-systemd-flag-705501)       <backend model='random'>/dev/random</backend>
	I0224 13:11:14.747639  934320 main.go:141] libmachine: (force-systemd-flag-705501)     </rng>
	I0224 13:11:14.747648  934320 main.go:141] libmachine: (force-systemd-flag-705501)     
	I0224 13:11:14.747659  934320 main.go:141] libmachine: (force-systemd-flag-705501)     
	I0224 13:11:14.747668  934320 main.go:141] libmachine: (force-systemd-flag-705501)   </devices>
	I0224 13:11:14.747680  934320 main.go:141] libmachine: (force-systemd-flag-705501) </domain>
	I0224 13:11:14.747694  934320 main.go:141] libmachine: (force-systemd-flag-705501) 
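
	With disk, network, and domain XML in place, the guest is defined and booted. A hedged sketch of that step with the same libvirt Go bindings (helper name and error handling are illustrative, not the driver's real code):

	    import (
	        "fmt"

	        libvirt "libvirt.org/go/libvirt"
	    )

	    // startDomain defines a persistent domain from the XML shown above and
	    // boots it, corresponding to the "define libvirt domain using xml" and
	    // "starting domain..." steps in the log.
	    func startDomain(conn *libvirt.Connect, domainXML string) error {
	        dom, err := conn.DomainDefineXML(domainXML)
	        if err != nil {
	            return fmt.Errorf("define domain: %w", err)
	        }
	        defer dom.Free()

	        // Create() on a defined (persistent) domain starts the guest; the
	        // driver then waits for a DHCP lease on the private network.
	        if err := dom.Create(); err != nil {
	            return fmt.Errorf("start domain: %w", err)
	        }
	        return nil
	    }
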
	I0224 13:11:14.751975  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | domain force-systemd-flag-705501 has defined MAC address 52:54:00:ba:d0:37 in network default
	I0224 13:11:14.752651  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | domain force-systemd-flag-705501 has defined MAC address 52:54:00:b5:e9:60 in network mk-force-systemd-flag-705501
	I0224 13:11:14.752669  934320 main.go:141] libmachine: (force-systemd-flag-705501) starting domain...
	I0224 13:11:14.752680  934320 main.go:141] libmachine: (force-systemd-flag-705501) ensuring networks are active...
	I0224 13:11:14.753381  934320 main.go:141] libmachine: (force-systemd-flag-705501) Ensuring network default is active
	I0224 13:11:14.753732  934320 main.go:141] libmachine: (force-systemd-flag-705501) Ensuring network mk-force-systemd-flag-705501 is active
	I0224 13:11:14.754339  934320 main.go:141] libmachine: (force-systemd-flag-705501) getting domain XML...
	I0224 13:11:14.755153  934320 main.go:141] libmachine: (force-systemd-flag-705501) creating domain...
	I0224 13:11:16.049804  934320 main.go:141] libmachine: (force-systemd-flag-705501) waiting for IP...
	I0224 13:11:16.050954  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | domain force-systemd-flag-705501 has defined MAC address 52:54:00:b5:e9:60 in network mk-force-systemd-flag-705501
	I0224 13:11:16.051522  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | unable to find current IP address of domain force-systemd-flag-705501 in network mk-force-systemd-flag-705501
	I0224 13:11:16.051586  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | I0224 13:11:16.051520  934382 retry.go:31] will retry after 250.475537ms: waiting for domain to come up
	I0224 13:11:16.304135  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | domain force-systemd-flag-705501 has defined MAC address 52:54:00:b5:e9:60 in network mk-force-systemd-flag-705501
	I0224 13:11:16.304688  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | unable to find current IP address of domain force-systemd-flag-705501 in network mk-force-systemd-flag-705501
	I0224 13:11:16.304741  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | I0224 13:11:16.304670  934382 retry.go:31] will retry after 239.587801ms: waiting for domain to come up
	I0224 13:11:16.547428  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | domain force-systemd-flag-705501 has defined MAC address 52:54:00:b5:e9:60 in network mk-force-systemd-flag-705501
	I0224 13:11:16.548116  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | unable to find current IP address of domain force-systemd-flag-705501 in network mk-force-systemd-flag-705501
	I0224 13:11:16.548144  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | I0224 13:11:16.548012  934382 retry.go:31] will retry after 447.505277ms: waiting for domain to come up
	I0224 13:11:17.040525  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | domain force-systemd-flag-705501 has defined MAC address 52:54:00:b5:e9:60 in network mk-force-systemd-flag-705501
	I0224 13:11:17.041172  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | unable to find current IP address of domain force-systemd-flag-705501 in network mk-force-systemd-flag-705501
	I0224 13:11:17.041200  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | I0224 13:11:17.041137  934382 retry.go:31] will retry after 485.215487ms: waiting for domain to come up
	I0224 13:11:17.528102  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | domain force-systemd-flag-705501 has defined MAC address 52:54:00:b5:e9:60 in network mk-force-systemd-flag-705501
	I0224 13:11:17.528634  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | unable to find current IP address of domain force-systemd-flag-705501 in network mk-force-systemd-flag-705501
	I0224 13:11:17.528693  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | I0224 13:11:17.528622  934382 retry.go:31] will retry after 480.479367ms: waiting for domain to come up
	I0224 13:11:18.010216  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | domain force-systemd-flag-705501 has defined MAC address 52:54:00:b5:e9:60 in network mk-force-systemd-flag-705501
	I0224 13:11:18.010747  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | unable to find current IP address of domain force-systemd-flag-705501 in network mk-force-systemd-flag-705501
	I0224 13:11:18.010785  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | I0224 13:11:18.010723  934382 retry.go:31] will retry after 651.884594ms: waiting for domain to come up
	I0224 13:11:18.664609  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | domain force-systemd-flag-705501 has defined MAC address 52:54:00:b5:e9:60 in network mk-force-systemd-flag-705501
	I0224 13:11:18.665085  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | unable to find current IP address of domain force-systemd-flag-705501 in network mk-force-systemd-flag-705501
	I0224 13:11:18.665138  934320 main.go:141] libmachine: (force-systemd-flag-705501) DBG | I0224 13:11:18.665061  934382 retry.go:31] will retry after 757.358789ms: waiting for domain to come up
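
	The "will retry after ..." lines above come from a backoff loop that keeps polling for the new domain's DHCP lease until an address appears. A generic sketch of that pattern (timings and names are illustrative, not minikube's retry package):

	    import (
	        "fmt"
	        "math/rand"
	        "time"
	    )

	    // waitForIP polls lookup() with jittered, roughly exponential backoff
	    // until it returns an address or the timeout expires.
	    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	        deadline := time.Now().Add(timeout)
	        backoff := 250 * time.Millisecond
	        for time.Now().Before(deadline) {
	            if ip, err := lookup(); err == nil && ip != "" {
	                return ip, nil
	            }
	            // Jitter keeps parallel machine creations from polling in lockstep.
	            time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff/2))))
	            if backoff < 2*time.Second {
	                backoff *= 2
	            }
	        }
	        return "", fmt.Errorf("timed out after %s waiting for domain to come up", timeout)
	    }
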
	I0224 13:11:18.229888  933673 pod_ready.go:93] pod "kube-proxy-mk2vg" in "kube-system" namespace has status "Ready":"True"
	I0224 13:11:18.229913  933673 pod_ready.go:82] duration metric: took 400.533344ms for pod "kube-proxy-mk2vg" in "kube-system" namespace to be "Ready" ...
	I0224 13:11:18.229925  933673 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-290993" in "kube-system" namespace to be "Ready" ...
	I0224 13:11:18.629662  933673 pod_ready.go:93] pod "kube-scheduler-pause-290993" in "kube-system" namespace has status "Ready":"True"
	I0224 13:11:18.629700  933673 pod_ready.go:82] duration metric: took 399.766835ms for pod "kube-scheduler-pause-290993" in "kube-system" namespace to be "Ready" ...
	I0224 13:11:18.629713  933673 pod_ready.go:39] duration metric: took 1.925819677s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 13:11:18.629736  933673 api_server.go:52] waiting for apiserver process to appear ...
	I0224 13:11:18.629811  933673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:11:18.645301  933673 api_server.go:72] duration metric: took 2.161585914s to wait for apiserver process to appear ...
	I0224 13:11:18.645353  933673 api_server.go:88] waiting for apiserver healthz status ...
	I0224 13:11:18.645382  933673 api_server.go:253] Checking apiserver healthz at https://192.168.72.181:8443/healthz ...
	I0224 13:11:18.652414  933673 api_server.go:279] https://192.168.72.181:8443/healthz returned 200:
	ok
	I0224 13:11:18.653383  933673 api_server.go:141] control plane version: v1.32.2
	I0224 13:11:18.653405  933673 api_server.go:131] duration metric: took 8.044526ms to wait for apiserver health ...
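
	The healthz wait above is a plain HTTPS GET against the apiserver that succeeds once /healthz returns 200 with body "ok". A minimal sketch of that probe; the real client authenticates with the cluster's CA and client certificates instead of skipping verification:

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "strings"
	        "time"
	    )

	    // apiserverHealthy probes <endpoint>/healthz and returns an error
	    // unless it answers 200 "ok".
	    func apiserverHealthy(endpoint string) error {
	        client := &http.Client{
	            Timeout: 5 * time.Second,
	            Transport: &http.Transport{
	                // Illustrative only; production code verifies the cluster CA.
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        resp, err := client.Get(endpoint + "/healthz")
	        if err != nil {
	            return err
	        }
	        defer resp.Body.Close()

	        body, err := io.ReadAll(resp.Body)
	        if err != nil {
	            return err
	        }
	        if resp.StatusCode != http.StatusOK || strings.TrimSpace(string(body)) != "ok" {
	            return fmt.Errorf("healthz returned %d: %q", resp.StatusCode, body)
	        }
	        return nil
	    }

	For the cluster above, the probe target would be https://192.168.72.181:8443.
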
	I0224 13:11:18.653413  933673 system_pods.go:43] waiting for kube-system pods to appear ...
	I0224 13:11:18.829296  933673 system_pods.go:59] 6 kube-system pods found
	I0224 13:11:18.829358  933673 system_pods.go:61] "coredns-668d6bf9bc-sqwj8" [216792f5-1104-4be5-bd91-c56ec040853c] Running
	I0224 13:11:18.829365  933673 system_pods.go:61] "etcd-pause-290993" [abfc9069-0ed5-4b71-b6e5-13aabd1a0394] Running
	I0224 13:11:18.829368  933673 system_pods.go:61] "kube-apiserver-pause-290993" [8a1a789e-616c-42a7-944b-72e626dc0dee] Running
	I0224 13:11:18.829372  933673 system_pods.go:61] "kube-controller-manager-pause-290993" [8d523d7a-0768-4c4c-bc94-76f57bdd4e09] Running
	I0224 13:11:18.829377  933673 system_pods.go:61] "kube-proxy-mk2vg" [cae36757-e93e-4727-9ed4-f05ee8363e3f] Running
	I0224 13:11:18.829380  933673 system_pods.go:61] "kube-scheduler-pause-290993" [86443e75-b6ca-442b-801a-0ec5e6e49621] Running
	I0224 13:11:18.829386  933673 system_pods.go:74] duration metric: took 175.967159ms to wait for pod list to return data ...
	I0224 13:11:18.829393  933673 default_sa.go:34] waiting for default service account to be created ...
	I0224 13:11:19.028934  933673 default_sa.go:45] found service account: "default"
	I0224 13:11:19.028968  933673 default_sa.go:55] duration metric: took 199.569032ms for default service account to be created ...
	I0224 13:11:19.028980  933673 system_pods.go:116] waiting for k8s-apps to be running ...
	I0224 13:11:19.230408  933673 system_pods.go:86] 6 kube-system pods found
	I0224 13:11:19.230455  933673 system_pods.go:89] "coredns-668d6bf9bc-sqwj8" [216792f5-1104-4be5-bd91-c56ec040853c] Running
	I0224 13:11:19.230464  933673 system_pods.go:89] "etcd-pause-290993" [abfc9069-0ed5-4b71-b6e5-13aabd1a0394] Running
	I0224 13:11:19.230471  933673 system_pods.go:89] "kube-apiserver-pause-290993" [8a1a789e-616c-42a7-944b-72e626dc0dee] Running
	I0224 13:11:19.230477  933673 system_pods.go:89] "kube-controller-manager-pause-290993" [8d523d7a-0768-4c4c-bc94-76f57bdd4e09] Running
	I0224 13:11:19.230482  933673 system_pods.go:89] "kube-proxy-mk2vg" [cae36757-e93e-4727-9ed4-f05ee8363e3f] Running
	I0224 13:11:19.230490  933673 system_pods.go:89] "kube-scheduler-pause-290993" [86443e75-b6ca-442b-801a-0ec5e6e49621] Running
	I0224 13:11:19.230516  933673 system_pods.go:126] duration metric: took 201.528433ms to wait for k8s-apps to be running ...
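
	The k8s-apps check above amounts to listing the kube-system pods and requiring every one to report phase Running. A small client-go sketch of the same check (kubeconfig handling simplified, helper name illustrative):

	    import (
	        "context"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // systemPodsRunning reports whether every kube-system pod is Running.
	    func systemPodsRunning(ctx context.Context, kubeconfig string) (bool, error) {
	        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	        if err != nil {
	            return false, err
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            return false, err
	        }
	        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	        if err != nil {
	            return false, err
	        }
	        for _, p := range pods.Items {
	            if p.Status.Phase != corev1.PodRunning {
	                return false, nil
	            }
	        }
	        return len(pods.Items) > 0, nil
	    }
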
	I0224 13:11:19.230527  933673 system_svc.go:44] waiting for kubelet service to be running ....
	I0224 13:11:19.230590  933673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 13:11:19.249909  933673 system_svc.go:56] duration metric: took 19.371763ms WaitForService to wait for kubelet
	I0224 13:11:19.249955  933673 kubeadm.go:582] duration metric: took 2.766242466s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0224 13:11:19.249980  933673 node_conditions.go:102] verifying NodePressure condition ...
	I0224 13:11:19.430208  933673 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0224 13:11:19.430236  933673 node_conditions.go:123] node cpu capacity is 2
	I0224 13:11:19.430250  933673 node_conditions.go:105] duration metric: took 180.263061ms to run NodePressure ...
	I0224 13:11:19.430264  933673 start.go:241] waiting for startup goroutines ...
	I0224 13:11:19.430271  933673 start.go:246] waiting for cluster config update ...
	I0224 13:11:19.430279  933673 start.go:255] writing updated cluster config ...
	I0224 13:11:19.430581  933673 ssh_runner.go:195] Run: rm -f paused
	I0224 13:11:19.485369  933673 start.go:600] kubectl: 1.32.2, cluster: 1.32.2 (minor skew: 0)
	I0224 13:11:19.487532  933673 out.go:177] * Done! kubectl is now configured to use "pause-290993" cluster and "default" namespace by default
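
	The "(minor skew: 0)" note above is the absolute difference between the kubectl client's and the cluster's minor versions (both 1.32 here). A trivial sketch of that comparison, with an illustrative helper name:

	    import (
	        "fmt"
	        "strconv"
	        "strings"
	    )

	    // minorSkew parses "major.minor.patch" from both versions and returns
	    // the absolute difference of the minor components.
	    func minorSkew(clientVer, serverVer string) (int, error) {
	        minor := func(v string) (int, error) {
	            parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	            if len(parts) < 2 {
	                return 0, fmt.Errorf("unexpected version %q", v)
	            }
	            return strconv.Atoi(parts[1])
	        }
	        c, err := minor(clientVer)
	        if err != nil {
	            return 0, err
	        }
	        s, err := minor(serverVer)
	        if err != nil {
	            return 0, err
	        }
	        if c > s {
	            return c - s, nil
	        }
	        return s - c, nil
	    }
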
	
	
	==> CRI-O <==
	Feb 24 13:11:22 pause-290993 crio[2151]: time="2025-02-24 13:11:22.283734791Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740402682283712903,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aea72861-fba3-4b7a-a706-c8d04faf2c91 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:11:22 pause-290993 crio[2151]: time="2025-02-24 13:11:22.284389148Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=513fd0cd-a524-4df3-bd81-6a9fc065ca2a name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:11:22 pause-290993 crio[2151]: time="2025-02-24 13:11:22.284470641Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=513fd0cd-a524-4df3-bd81-6a9fc065ca2a name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:11:22 pause-290993 crio[2151]: time="2025-02-24 13:11:22.284717531Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f8eafad8b80f5f834ac866ba0d91cb416b18b88038e42992066f312885fce532,PodSandboxId:a319254bf6791ae1dacb9c421789737d634cb354379f36a22949ee674ef31c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1740402661322676622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mk2vg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae36757-e93e-4727-9ed4-f05ee8363e3f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a27b4fbb15791092457f489a9be8d91ddf82476ab6febce07f107071e2db5cd,PodSandboxId:4249891088dffed2075269b139dfb29730981c76ef01c04ede491f4452cb1f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1740402661303312710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-sqwj8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216792f5-1104-4be5-bd91-c56ec040853c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c271018fa2fb5d42d0f6f6dc99dcfddfff803db08293f738c60c8cb621ff670,PodSandboxId:69cf6bbd69250f376bb5138f9ba2b5d74bf60ab62bdbccf923ffe526a7b62562,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1740402657504389624,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f6cd22ad0f
de2cb33bbe8b1c4a5f91c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b89c570f2f66d536871cd493c13ed4abe9852442af6b21ff0e0d0072895434aa,PodSandboxId:b497e8fb3520b1d31b88e637daade81fce0cb024591c3fd6c9618a2cbe9b7c95,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1740402657477598807,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b3d0156fc074bbb0220322372b6a858,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:383050ee1a444fe799f2f435873bc41a638537b8728abb71472c282623fac7b5,PodSandboxId:e52ce03b3fbbb35e449a0a060a747ceb518040bcd9301ff40c016f6a58405762,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1740402657488728534,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c340e56a6c3ce70a38356e0ee1000e9c,},Annotations:map[string]string{io.kubernete
s.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29de9d999e399ffce5e959dcedd5d9cc494443ce36e3127e18f30c500e3de3fe,PodSandboxId:ccd28a2937c5660c8415bf35ceb663a8d7c7e8bdc39289257d7be1759a4bcb37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1740402657458641736,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ad2b129b802c71f8413025523e947a,},Annotations:map[string]string{io.
kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f1ea71953e2c9b870aaeb5ec033589e6c70cf49214670b79afaab38ade0a6e7,PodSandboxId:4249891088dffed2075269b139dfb29730981c76ef01c04ede491f4452cb1f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1740402634300834244,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-sqwj8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216792f5-1104-4be5-bd91-c56ec040853c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a2
04d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f69fc3df30b8e460e2137176d137957ab7add0f72b0fe68f02803a95f23ff915,PodSandboxId:a319254bf6791ae1dacb9c421789737d634cb354379f36a22949ee674ef31c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1740402633576570016,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-mk2vg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae36757-e93e-4727-9ed4-f05ee8363e3f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba98184a2895c8a63929c78ca71192f47dc2c5957bbe799ed35a24f9bfcd63eb,PodSandboxId:b497e8fb3520b1d31b88e637daade81fce0cb024591c3fd6c9618a2cbe9b7c95,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1740402633426032088,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-290993,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 1b3d0156fc074bbb0220322372b6a858,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1f36dc49e5c36127b0839d3e0b61bc18d838bb8a0c448b55d55f1a6191df34,PodSandboxId:e52ce03b3fbbb35e449a0a060a747ceb518040bcd9301ff40c016f6a58405762,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1740402633330735603,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-290993,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: c340e56a6c3ce70a38356e0ee1000e9c,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f1a789af555fca56718d4d1dbba6bd69970fc8e2158cfd4f1d4f49f36bfcfbc,PodSandboxId:ccd28a2937c5660c8415bf35ceb663a8d7c7e8bdc39289257d7be1759a4bcb37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1740402633270459262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-290993,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: e6ad2b129b802c71f8413025523e947a,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0eeb0c95859b126474282095e6479908c48c39791e874d93d6bb6eb25e0bbaa,PodSandboxId:69cf6bbd69250f376bb5138f9ba2b5d74bf60ab62bdbccf923ffe526a7b62562,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1740402633142679917,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 0f6cd22ad0fde2cb33bbe8b1c4a5f91c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=513fd0cd-a524-4df3-bd81-6a9fc065ca2a name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:11:22 pause-290993 crio[2151]: time="2025-02-24 13:11:22.328488287Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=618c3213-674b-4ab5-a98c-2d6ebb760d0c name=/runtime.v1.RuntimeService/Version
	Feb 24 13:11:22 pause-290993 crio[2151]: time="2025-02-24 13:11:22.328578023Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=618c3213-674b-4ab5-a98c-2d6ebb760d0c name=/runtime.v1.RuntimeService/Version
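
	The CRI-O entries in this section are its side of CRI gRPC calls (Version, ImageFsInfo, ListContainers) arriving over its unix socket, typically from the kubelet and from crictl-style tooling. A hedged sketch of issuing the same calls with the k8s.io/cri-api client; the socket path and logging are assumptions, not taken from this run:

	    package main

	    import (
	        "context"
	        "log"
	        "time"

	        "google.golang.org/grpc"
	        "google.golang.org/grpc/credentials/insecure"
	        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	    )

	    func main() {
	        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
	            grpc.WithTransportCredentials(insecure.NewCredentials()))
	        if err != nil {
	            log.Fatalf("dial: %v", err)
	        }
	        defer conn.Close()

	        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	        defer cancel()

	        rt := runtimeapi.NewRuntimeServiceClient(conn)

	        // Matches the Version request/response pair logged above.
	        ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	        if err != nil {
	            log.Fatalf("Version: %v", err)
	        }
	        log.Printf("%s %s (CRI %s)", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	        // Matches the unfiltered ListContainers calls logged above.
	        containers, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	        if err != nil {
	            log.Fatalf("ListContainers: %v", err)
	        }
	        log.Printf("%d containers", len(containers.Containers))
	    }
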
	Feb 24 13:11:22 pause-290993 crio[2151]: time="2025-02-24 13:11:22.330076659Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d2fc8dc4-4c6b-4159-939a-42599afff1cc name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:11:22 pause-290993 crio[2151]: time="2025-02-24 13:11:22.330544083Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740402682330516798,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2fc8dc4-4c6b-4159-939a-42599afff1cc name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:11:22 pause-290993 crio[2151]: time="2025-02-24 13:11:22.331394776Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3f774aa1-2016-451a-aae1-19afac9373f0 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:11:22 pause-290993 crio[2151]: time="2025-02-24 13:11:22.331451814Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3f774aa1-2016-451a-aae1-19afac9373f0 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:11:22 pause-290993 crio[2151]: time="2025-02-24 13:11:22.331715405Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f8eafad8b80f5f834ac866ba0d91cb416b18b88038e42992066f312885fce532,PodSandboxId:a319254bf6791ae1dacb9c421789737d634cb354379f36a22949ee674ef31c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1740402661322676622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mk2vg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae36757-e93e-4727-9ed4-f05ee8363e3f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a27b4fbb15791092457f489a9be8d91ddf82476ab6febce07f107071e2db5cd,PodSandboxId:4249891088dffed2075269b139dfb29730981c76ef01c04ede491f4452cb1f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1740402661303312710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-sqwj8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216792f5-1104-4be5-bd91-c56ec040853c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c271018fa2fb5d42d0f6f6dc99dcfddfff803db08293f738c60c8cb621ff670,PodSandboxId:69cf6bbd69250f376bb5138f9ba2b5d74bf60ab62bdbccf923ffe526a7b62562,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1740402657504389624,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f6cd22ad0f
de2cb33bbe8b1c4a5f91c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b89c570f2f66d536871cd493c13ed4abe9852442af6b21ff0e0d0072895434aa,PodSandboxId:b497e8fb3520b1d31b88e637daade81fce0cb024591c3fd6c9618a2cbe9b7c95,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1740402657477598807,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b3d0156fc074bbb0220322372b6a858,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:383050ee1a444fe799f2f435873bc41a638537b8728abb71472c282623fac7b5,PodSandboxId:e52ce03b3fbbb35e449a0a060a747ceb518040bcd9301ff40c016f6a58405762,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1740402657488728534,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c340e56a6c3ce70a38356e0ee1000e9c,},Annotations:map[string]string{io.kubernete
s.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29de9d999e399ffce5e959dcedd5d9cc494443ce36e3127e18f30c500e3de3fe,PodSandboxId:ccd28a2937c5660c8415bf35ceb663a8d7c7e8bdc39289257d7be1759a4bcb37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1740402657458641736,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ad2b129b802c71f8413025523e947a,},Annotations:map[string]string{io.
kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f1ea71953e2c9b870aaeb5ec033589e6c70cf49214670b79afaab38ade0a6e7,PodSandboxId:4249891088dffed2075269b139dfb29730981c76ef01c04ede491f4452cb1f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1740402634300834244,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-sqwj8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216792f5-1104-4be5-bd91-c56ec040853c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a2
04d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f69fc3df30b8e460e2137176d137957ab7add0f72b0fe68f02803a95f23ff915,PodSandboxId:a319254bf6791ae1dacb9c421789737d634cb354379f36a22949ee674ef31c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1740402633576570016,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-mk2vg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae36757-e93e-4727-9ed4-f05ee8363e3f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba98184a2895c8a63929c78ca71192f47dc2c5957bbe799ed35a24f9bfcd63eb,PodSandboxId:b497e8fb3520b1d31b88e637daade81fce0cb024591c3fd6c9618a2cbe9b7c95,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1740402633426032088,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-290993,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 1b3d0156fc074bbb0220322372b6a858,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1f36dc49e5c36127b0839d3e0b61bc18d838bb8a0c448b55d55f1a6191df34,PodSandboxId:e52ce03b3fbbb35e449a0a060a747ceb518040bcd9301ff40c016f6a58405762,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1740402633330735603,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-290993,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: c340e56a6c3ce70a38356e0ee1000e9c,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f1a789af555fca56718d4d1dbba6bd69970fc8e2158cfd4f1d4f49f36bfcfbc,PodSandboxId:ccd28a2937c5660c8415bf35ceb663a8d7c7e8bdc39289257d7be1759a4bcb37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1740402633270459262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-290993,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: e6ad2b129b802c71f8413025523e947a,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0eeb0c95859b126474282095e6479908c48c39791e874d93d6bb6eb25e0bbaa,PodSandboxId:69cf6bbd69250f376bb5138f9ba2b5d74bf60ab62bdbccf923ffe526a7b62562,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1740402633142679917,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 0f6cd22ad0fde2cb33bbe8b1c4a5f91c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3f774aa1-2016-451a-aae1-19afac9373f0 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:11:22 pause-290993 crio[2151]: time="2025-02-24 13:11:22.375781754Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e3c5f760-927b-4a35-ad7f-46d78b503ab1 name=/runtime.v1.RuntimeService/Version
	Feb 24 13:11:22 pause-290993 crio[2151]: time="2025-02-24 13:11:22.375876112Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e3c5f760-927b-4a35-ad7f-46d78b503ab1 name=/runtime.v1.RuntimeService/Version
	Feb 24 13:11:22 pause-290993 crio[2151]: time="2025-02-24 13:11:22.377137666Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e2c352ab-2892-47d5-84ef-12d192f3bce4 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:11:22 pause-290993 crio[2151]: time="2025-02-24 13:11:22.377626985Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740402682377601043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e2c352ab-2892-47d5-84ef-12d192f3bce4 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:11:22 pause-290993 crio[2151]: time="2025-02-24 13:11:22.378382451Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ab04d9e-8db1-49fb-ae41-ccd4de722ff9 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:11:22 pause-290993 crio[2151]: time="2025-02-24 13:11:22.378455831Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ab04d9e-8db1-49fb-ae41-ccd4de722ff9 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:11:22 pause-290993 crio[2151]: time="2025-02-24 13:11:22.379017528Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f8eafad8b80f5f834ac866ba0d91cb416b18b88038e42992066f312885fce532,PodSandboxId:a319254bf6791ae1dacb9c421789737d634cb354379f36a22949ee674ef31c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1740402661322676622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mk2vg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae36757-e93e-4727-9ed4-f05ee8363e3f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a27b4fbb15791092457f489a9be8d91ddf82476ab6febce07f107071e2db5cd,PodSandboxId:4249891088dffed2075269b139dfb29730981c76ef01c04ede491f4452cb1f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1740402661303312710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-sqwj8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216792f5-1104-4be5-bd91-c56ec040853c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c271018fa2fb5d42d0f6f6dc99dcfddfff803db08293f738c60c8cb621ff670,PodSandboxId:69cf6bbd69250f376bb5138f9ba2b5d74bf60ab62bdbccf923ffe526a7b62562,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1740402657504389624,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f6cd22ad0f
de2cb33bbe8b1c4a5f91c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b89c570f2f66d536871cd493c13ed4abe9852442af6b21ff0e0d0072895434aa,PodSandboxId:b497e8fb3520b1d31b88e637daade81fce0cb024591c3fd6c9618a2cbe9b7c95,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1740402657477598807,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b3d0156fc074bbb0220322372b6a858,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:383050ee1a444fe799f2f435873bc41a638537b8728abb71472c282623fac7b5,PodSandboxId:e52ce03b3fbbb35e449a0a060a747ceb518040bcd9301ff40c016f6a58405762,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1740402657488728534,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c340e56a6c3ce70a38356e0ee1000e9c,},Annotations:map[string]string{io.kubernete
s.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29de9d999e399ffce5e959dcedd5d9cc494443ce36e3127e18f30c500e3de3fe,PodSandboxId:ccd28a2937c5660c8415bf35ceb663a8d7c7e8bdc39289257d7be1759a4bcb37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1740402657458641736,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ad2b129b802c71f8413025523e947a,},Annotations:map[string]string{io.
kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f1ea71953e2c9b870aaeb5ec033589e6c70cf49214670b79afaab38ade0a6e7,PodSandboxId:4249891088dffed2075269b139dfb29730981c76ef01c04ede491f4452cb1f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1740402634300834244,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-sqwj8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216792f5-1104-4be5-bd91-c56ec040853c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a2
04d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f69fc3df30b8e460e2137176d137957ab7add0f72b0fe68f02803a95f23ff915,PodSandboxId:a319254bf6791ae1dacb9c421789737d634cb354379f36a22949ee674ef31c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1740402633576570016,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-mk2vg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae36757-e93e-4727-9ed4-f05ee8363e3f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba98184a2895c8a63929c78ca71192f47dc2c5957bbe799ed35a24f9bfcd63eb,PodSandboxId:b497e8fb3520b1d31b88e637daade81fce0cb024591c3fd6c9618a2cbe9b7c95,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1740402633426032088,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-290993,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 1b3d0156fc074bbb0220322372b6a858,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1f36dc49e5c36127b0839d3e0b61bc18d838bb8a0c448b55d55f1a6191df34,PodSandboxId:e52ce03b3fbbb35e449a0a060a747ceb518040bcd9301ff40c016f6a58405762,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1740402633330735603,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-290993,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: c340e56a6c3ce70a38356e0ee1000e9c,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f1a789af555fca56718d4d1dbba6bd69970fc8e2158cfd4f1d4f49f36bfcfbc,PodSandboxId:ccd28a2937c5660c8415bf35ceb663a8d7c7e8bdc39289257d7be1759a4bcb37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1740402633270459262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-290993,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: e6ad2b129b802c71f8413025523e947a,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0eeb0c95859b126474282095e6479908c48c39791e874d93d6bb6eb25e0bbaa,PodSandboxId:69cf6bbd69250f376bb5138f9ba2b5d74bf60ab62bdbccf923ffe526a7b62562,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1740402633142679917,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 0f6cd22ad0fde2cb33bbe8b1c4a5f91c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ab04d9e-8db1-49fb-ae41-ccd4de722ff9 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:11:22 pause-290993 crio[2151]: time="2025-02-24 13:11:22.425999890Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=684247eb-4b15-4ce1-b46c-96efc1823d3d name=/runtime.v1.RuntimeService/Version
	Feb 24 13:11:22 pause-290993 crio[2151]: time="2025-02-24 13:11:22.426091779Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=684247eb-4b15-4ce1-b46c-96efc1823d3d name=/runtime.v1.RuntimeService/Version
	Feb 24 13:11:22 pause-290993 crio[2151]: time="2025-02-24 13:11:22.427340216Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=17107951-1889-45f3-aceb-992cd2ff3a07 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:11:22 pause-290993 crio[2151]: time="2025-02-24 13:11:22.427936532Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740402682427912289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=17107951-1889-45f3-aceb-992cd2ff3a07 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:11:22 pause-290993 crio[2151]: time="2025-02-24 13:11:22.428684558Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae62e5ca-b610-48e1-8f1c-73e56c77b162 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:11:22 pause-290993 crio[2151]: time="2025-02-24 13:11:22.428757939Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae62e5ca-b610-48e1-8f1c-73e56c77b162 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:11:22 pause-290993 crio[2151]: time="2025-02-24 13:11:22.428987669Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f8eafad8b80f5f834ac866ba0d91cb416b18b88038e42992066f312885fce532,PodSandboxId:a319254bf6791ae1dacb9c421789737d634cb354379f36a22949ee674ef31c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1740402661322676622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mk2vg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae36757-e93e-4727-9ed4-f05ee8363e3f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a27b4fbb15791092457f489a9be8d91ddf82476ab6febce07f107071e2db5cd,PodSandboxId:4249891088dffed2075269b139dfb29730981c76ef01c04ede491f4452cb1f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1740402661303312710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-sqwj8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216792f5-1104-4be5-bd91-c56ec040853c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c271018fa2fb5d42d0f6f6dc99dcfddfff803db08293f738c60c8cb621ff670,PodSandboxId:69cf6bbd69250f376bb5138f9ba2b5d74bf60ab62bdbccf923ffe526a7b62562,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1740402657504389624,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f6cd22ad0f
de2cb33bbe8b1c4a5f91c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b89c570f2f66d536871cd493c13ed4abe9852442af6b21ff0e0d0072895434aa,PodSandboxId:b497e8fb3520b1d31b88e637daade81fce0cb024591c3fd6c9618a2cbe9b7c95,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1740402657477598807,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b3d0156fc074bbb0220322372b6a858,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:383050ee1a444fe799f2f435873bc41a638537b8728abb71472c282623fac7b5,PodSandboxId:e52ce03b3fbbb35e449a0a060a747ceb518040bcd9301ff40c016f6a58405762,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1740402657488728534,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c340e56a6c3ce70a38356e0ee1000e9c,},Annotations:map[string]string{io.kubernete
s.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29de9d999e399ffce5e959dcedd5d9cc494443ce36e3127e18f30c500e3de3fe,PodSandboxId:ccd28a2937c5660c8415bf35ceb663a8d7c7e8bdc39289257d7be1759a4bcb37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1740402657458641736,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ad2b129b802c71f8413025523e947a,},Annotations:map[string]string{io.
kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f1ea71953e2c9b870aaeb5ec033589e6c70cf49214670b79afaab38ade0a6e7,PodSandboxId:4249891088dffed2075269b139dfb29730981c76ef01c04ede491f4452cb1f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1740402634300834244,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-sqwj8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216792f5-1104-4be5-bd91-c56ec040853c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a2
04d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f69fc3df30b8e460e2137176d137957ab7add0f72b0fe68f02803a95f23ff915,PodSandboxId:a319254bf6791ae1dacb9c421789737d634cb354379f36a22949ee674ef31c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1740402633576570016,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-mk2vg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae36757-e93e-4727-9ed4-f05ee8363e3f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba98184a2895c8a63929c78ca71192f47dc2c5957bbe799ed35a24f9bfcd63eb,PodSandboxId:b497e8fb3520b1d31b88e637daade81fce0cb024591c3fd6c9618a2cbe9b7c95,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1740402633426032088,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-290993,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 1b3d0156fc074bbb0220322372b6a858,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1f36dc49e5c36127b0839d3e0b61bc18d838bb8a0c448b55d55f1a6191df34,PodSandboxId:e52ce03b3fbbb35e449a0a060a747ceb518040bcd9301ff40c016f6a58405762,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1740402633330735603,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-290993,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: c340e56a6c3ce70a38356e0ee1000e9c,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f1a789af555fca56718d4d1dbba6bd69970fc8e2158cfd4f1d4f49f36bfcfbc,PodSandboxId:ccd28a2937c5660c8415bf35ceb663a8d7c7e8bdc39289257d7be1759a4bcb37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1740402633270459262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-290993,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: e6ad2b129b802c71f8413025523e947a,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0eeb0c95859b126474282095e6479908c48c39791e874d93d6bb6eb25e0bbaa,PodSandboxId:69cf6bbd69250f376bb5138f9ba2b5d74bf60ab62bdbccf923ffe526a7b62562,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1740402633142679917,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-290993,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 0f6cd22ad0fde2cb33bbe8b1c4a5f91c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae62e5ca-b610-48e1-8f1c-73e56c77b162 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f8eafad8b80f5       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   21 seconds ago      Running             kube-proxy                2                   a319254bf6791       kube-proxy-mk2vg
	5a27b4fbb1579       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   21 seconds ago      Running             coredns                   2                   4249891088dff       coredns-668d6bf9bc-sqwj8
	2c271018fa2fb       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   24 seconds ago      Running             kube-scheduler            2                   69cf6bbd69250       kube-scheduler-pause-290993
	383050ee1a444       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   25 seconds ago      Running             kube-apiserver            2                   e52ce03b3fbbb       kube-apiserver-pause-290993
	b89c570f2f66d       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   25 seconds ago      Running             etcd                      2                   b497e8fb3520b       etcd-pause-290993
	29de9d999e399       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   25 seconds ago      Running             kube-controller-manager   2                   ccd28a2937c56       kube-controller-manager-pause-290993
	8f1ea71953e2c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   48 seconds ago      Exited              coredns                   1                   4249891088dff       coredns-668d6bf9bc-sqwj8
	f69fc3df30b8e       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   48 seconds ago      Exited              kube-proxy                1                   a319254bf6791       kube-proxy-mk2vg
	ba98184a2895c       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   49 seconds ago      Exited              etcd                      1                   b497e8fb3520b       etcd-pause-290993
	ae1f36dc49e5c       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   49 seconds ago      Exited              kube-apiserver            1                   e52ce03b3fbbb       kube-apiserver-pause-290993
	5f1a789af555f       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   49 seconds ago      Exited              kube-controller-manager   1                   ccd28a2937c56       kube-controller-manager-pause-290993
	c0eeb0c95859b       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   49 seconds ago      Exited              kube-scheduler            1                   69cf6bbd69250       kube-scheduler-pause-290993
	
	
	==> coredns [5a27b4fbb15791092457f489a9be8d91ddf82476ab6febce07f107071e2db5cd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:55164 - 37126 "HINFO IN 6678825283635531683.3778464112934933217. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012455731s
	
	
	==> coredns [8f1ea71953e2c9b870aaeb5ec033589e6c70cf49214670b79afaab38ade0a6e7] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:43817 - 12111 "HINFO IN 5517621473291706977.9065500291906123585. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015152925s
	
	
	==> describe nodes <==
	Name:               pause-290993
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-290993
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76650f53499dbb51707efa4a87e94b72d747650
	                    minikube.k8s.io/name=pause-290993
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_24T13_10_04_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Feb 2025 13:10:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-290993
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Feb 2025 13:11:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Feb 2025 13:11:00 +0000   Mon, 24 Feb 2025 13:09:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Feb 2025 13:11:00 +0000   Mon, 24 Feb 2025 13:09:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Feb 2025 13:11:00 +0000   Mon, 24 Feb 2025 13:09:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Feb 2025 13:11:00 +0000   Mon, 24 Feb 2025 13:10:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.181
	  Hostname:    pause-290993
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 a42fa780a1954c298dac98a527ed2671
	  System UUID:                a42fa780-a195-4c29-8dac-98a527ed2671
	  Boot ID:                    506e35b5-4f60-4410-a994-e4f783351032
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-sqwj8                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     74s
	  kube-system                 etcd-pause-290993                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         79s
	  kube-system                 kube-apiserver-pause-290993             250m (12%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-controller-manager-pause-290993    200m (10%)    0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-proxy-mk2vg                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-scheduler-pause-290993             100m (5%)     0 (0%)      0 (0%)           0 (0%)         79s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 72s                kube-proxy       
	  Normal  Starting                 21s                kube-proxy       
	  Normal  Starting                 45s                kube-proxy       
	  Normal  NodeHasNoDiskPressure    86s (x6 over 86s)  kubelet          Node pause-290993 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  86s (x7 over 86s)  kubelet          Node pause-290993 status is now: NodeHasSufficientMemory
	  Normal  Starting                 86s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     86s (x6 over 86s)  kubelet          Node pause-290993 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  86s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     79s                kubelet          Node pause-290993 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  79s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  79s                kubelet          Node pause-290993 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    79s                kubelet          Node pause-290993 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 79s                kubelet          Starting kubelet.
	  Normal  NodeReady                78s                kubelet          Node pause-290993 status is now: NodeReady
	  Normal  RegisteredNode           75s                node-controller  Node pause-290993 event: Registered Node pause-290993 in Controller
	  Normal  RegisteredNode           43s                node-controller  Node pause-290993 event: Registered Node pause-290993 in Controller
	  Normal  Starting                 26s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)  kubelet          Node pause-290993 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)  kubelet          Node pause-290993 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)  kubelet          Node pause-290993 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19s                node-controller  Node pause-290993 event: Registered Node pause-290993 in Controller
	
	
	==> dmesg <==
	[  +7.245316] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.064177] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075300] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.233713] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.146312] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.318258] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +4.416729] systemd-fstab-generator[748]: Ignoring "noauto" option for root device
	[  +0.061068] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.546452] systemd-fstab-generator[885]: Ignoring "noauto" option for root device
	[  +0.791377] kauditd_printk_skb: 46 callbacks suppressed
	[Feb24 13:10] systemd-fstab-generator[1248]: Ignoring "noauto" option for root device
	[  +0.180006] kauditd_printk_skb: 41 callbacks suppressed
	[  +4.926285] systemd-fstab-generator[1379]: Ignoring "noauto" option for root device
	[  +0.171707] kauditd_printk_skb: 21 callbacks suppressed
	[ +23.028860] systemd-fstab-generator[2073]: Ignoring "noauto" option for root device
	[  +0.075377] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.067245] systemd-fstab-generator[2085]: Ignoring "noauto" option for root device
	[  +0.168852] systemd-fstab-generator[2099]: Ignoring "noauto" option for root device
	[  +0.144497] systemd-fstab-generator[2112]: Ignoring "noauto" option for root device
	[  +0.279329] systemd-fstab-generator[2140]: Ignoring "noauto" option for root device
	[  +0.748453] systemd-fstab-generator[2272]: Ignoring "noauto" option for root device
	[  +4.217246] kauditd_printk_skb: 195 callbacks suppressed
	[ +19.709038] systemd-fstab-generator[3196]: Ignoring "noauto" option for root device
	[Feb24 13:11] kauditd_printk_skb: 53 callbacks suppressed
	[  +5.350455] systemd-fstab-generator[3677]: Ignoring "noauto" option for root device
	
	
	==> etcd [b89c570f2f66d536871cd493c13ed4abe9852442af6b21ff0e0d0072895434aa] <==
	{"level":"info","ts":"2025-02-24T13:10:57.816602Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-02-24T13:10:57.816630Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-02-24T13:10:57.816636Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-02-24T13:10:57.817271Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.72.181:2380"}
	{"level":"info","ts":"2025-02-24T13:10:57.817308Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.72.181:2380"}
	{"level":"info","ts":"2025-02-24T13:10:57.817935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b074c01599da0f0 switched to configuration voters=(11170980969397985520)"}
	{"level":"info","ts":"2025-02-24T13:10:57.817982Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"38acd6cd2f67011f","local-member-id":"9b074c01599da0f0","added-peer-id":"9b074c01599da0f0","added-peer-peer-urls":["https://192.168.72.181:2380"]}
	{"level":"info","ts":"2025-02-24T13:10:57.818061Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"38acd6cd2f67011f","local-member-id":"9b074c01599da0f0","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-24T13:10:57.818082Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-24T13:10:58.865290Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b074c01599da0f0 is starting a new election at term 3"}
	{"level":"info","ts":"2025-02-24T13:10:58.865407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b074c01599da0f0 became pre-candidate at term 3"}
	{"level":"info","ts":"2025-02-24T13:10:58.865456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b074c01599da0f0 received MsgPreVoteResp from 9b074c01599da0f0 at term 3"}
	{"level":"info","ts":"2025-02-24T13:10:58.865487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b074c01599da0f0 became candidate at term 4"}
	{"level":"info","ts":"2025-02-24T13:10:58.865508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b074c01599da0f0 received MsgVoteResp from 9b074c01599da0f0 at term 4"}
	{"level":"info","ts":"2025-02-24T13:10:58.865530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b074c01599da0f0 became leader at term 4"}
	{"level":"info","ts":"2025-02-24T13:10:58.865548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9b074c01599da0f0 elected leader 9b074c01599da0f0 at term 4"}
	{"level":"info","ts":"2025-02-24T13:10:58.868113Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"9b074c01599da0f0","local-member-attributes":"{Name:pause-290993 ClientURLs:[https://192.168.72.181:2379]}","request-path":"/0/members/9b074c01599da0f0/attributes","cluster-id":"38acd6cd2f67011f","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-24T13:10:58.868295Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-24T13:10:58.868347Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-24T13:10:58.868693Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-24T13:10:58.868729Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-24T13:10:58.869087Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-24T13:10:58.869523Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-24T13:10:58.870985Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-24T13:10:58.869747Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.181:2379"}
	
	
	==> etcd [ba98184a2895c8a63929c78ca71192f47dc2c5957bbe799ed35a24f9bfcd63eb] <==
	{"level":"info","ts":"2025-02-24T13:10:34.867354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b074c01599da0f0 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-02-24T13:10:34.867382Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b074c01599da0f0 received MsgPreVoteResp from 9b074c01599da0f0 at term 2"}
	{"level":"info","ts":"2025-02-24T13:10:34.867395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b074c01599da0f0 became candidate at term 3"}
	{"level":"info","ts":"2025-02-24T13:10:34.867404Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b074c01599da0f0 received MsgVoteResp from 9b074c01599da0f0 at term 3"}
	{"level":"info","ts":"2025-02-24T13:10:34.867412Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b074c01599da0f0 became leader at term 3"}
	{"level":"info","ts":"2025-02-24T13:10:34.867419Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9b074c01599da0f0 elected leader 9b074c01599da0f0 at term 3"}
	{"level":"info","ts":"2025-02-24T13:10:34.871433Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"9b074c01599da0f0","local-member-attributes":"{Name:pause-290993 ClientURLs:[https://192.168.72.181:2379]}","request-path":"/0/members/9b074c01599da0f0/attributes","cluster-id":"38acd6cd2f67011f","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-24T13:10:34.871561Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-24T13:10:34.871624Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-24T13:10:34.872510Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-24T13:10:34.875832Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.181:2379"}
	{"level":"info","ts":"2025-02-24T13:10:34.878485Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-24T13:10:34.878554Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-24T13:10:34.878945Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-24T13:10:34.879601Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-24T13:10:45.008167Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-02-24T13:10:45.008298Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"pause-290993","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.181:2380"],"advertise-client-urls":["https://192.168.72.181:2379"]}
	{"level":"warn","ts":"2025-02-24T13:10:45.008377Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-24T13:10:45.008452Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-24T13:10:45.027902Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.72.181:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-24T13:10:45.028001Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.72.181:2379: use of closed network connection"}
	{"level":"info","ts":"2025-02-24T13:10:45.029541Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9b074c01599da0f0","current-leader-member-id":"9b074c01599da0f0"}
	{"level":"info","ts":"2025-02-24T13:10:45.037157Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.72.181:2380"}
	{"level":"info","ts":"2025-02-24T13:10:45.037690Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.72.181:2380"}
	{"level":"info","ts":"2025-02-24T13:10:45.037738Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"pause-290993","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.181:2380"],"advertise-client-urls":["https://192.168.72.181:2379"]}
	
	
	==> kernel <==
	 13:11:22 up 1 min,  0 users,  load average: 0.98, 0.40, 0.15
	Linux pause-290993 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [383050ee1a444fe799f2f435873bc41a638537b8728abb71472c282623fac7b5] <==
	I0224 13:11:00.328157       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0224 13:11:00.328407       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0224 13:11:00.328477       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0224 13:11:00.334848       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0224 13:11:00.334934       1 policy_source.go:240] refreshing policies
	I0224 13:11:00.349977       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0224 13:11:00.374638       1 shared_informer.go:320] Caches are synced for configmaps
	I0224 13:11:00.375082       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0224 13:11:00.375181       1 aggregator.go:171] initial CRD sync complete...
	I0224 13:11:00.375246       1 autoregister_controller.go:144] Starting autoregister controller
	I0224 13:11:00.375253       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0224 13:11:00.375259       1 cache.go:39] Caches are synced for autoregister controller
	I0224 13:11:00.378938       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0224 13:11:00.385138       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0224 13:11:00.385931       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E0224 13:11:00.398589       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0224 13:11:01.083057       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0224 13:11:01.179975       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0224 13:11:02.280046       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0224 13:11:02.335176       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0224 13:11:02.379120       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0224 13:11:02.395025       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0224 13:11:11.180023       1 controller.go:615] quota admission added evaluator for: endpoints
	I0224 13:11:11.181137       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0224 13:11:11.181550       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-apiserver [ae1f36dc49e5c36127b0839d3e0b61bc18d838bb8a0c448b55d55f1a6191df34] <==
	W0224 13:10:54.529296       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.546998       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.570140       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.604488       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.619341       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.633066       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.645820       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.666306       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.717133       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.732605       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.738463       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.791058       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.823828       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.840507       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.856343       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.920431       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.935127       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:54.946746       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:55.002983       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:55.013064       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:55.039734       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:55.087345       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:55.091957       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:55.093276       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 13:10:55.131852       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [29de9d999e399ffce5e959dcedd5d9cc494443ce36e3127e18f30c500e3de3fe] <==
	I0224 13:11:03.482454       1 shared_informer.go:320] Caches are synced for taint
	I0224 13:11:03.482671       1 shared_informer.go:320] Caches are synced for endpoint
	I0224 13:11:03.484178       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0224 13:11:03.485408       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-290993"
	I0224 13:11:03.485547       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0224 13:11:03.487318       1 shared_informer.go:320] Caches are synced for stateful set
	I0224 13:11:03.487415       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0224 13:11:03.487606       1 shared_informer.go:320] Caches are synced for garbage collector
	I0224 13:11:03.487675       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0224 13:11:03.487698       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0224 13:11:03.488031       1 shared_informer.go:320] Caches are synced for crt configmap
	I0224 13:11:03.490738       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0224 13:11:03.494535       1 shared_informer.go:320] Caches are synced for node
	I0224 13:11:03.494794       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0224 13:11:03.495002       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0224 13:11:03.495036       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0224 13:11:03.495144       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0224 13:11:03.495379       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-290993"
	I0224 13:11:03.495520       1 shared_informer.go:320] Caches are synced for TTL
	I0224 13:11:03.497431       1 shared_informer.go:320] Caches are synced for attach detach
	I0224 13:11:03.497877       1 shared_informer.go:320] Caches are synced for resource quota
	I0224 13:11:03.524449       1 shared_informer.go:320] Caches are synced for garbage collector
	I0224 13:11:03.525743       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0224 13:11:11.193424       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="29.72206ms"
	I0224 13:11:11.193873       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="64.666µs"
	
	
	==> kube-controller-manager [5f1a789af555fca56718d4d1dbba6bd69970fc8e2158cfd4f1d4f49f36bfcfbc] <==
	I0224 13:10:39.802508       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0224 13:10:39.804407       1 shared_informer.go:320] Caches are synced for endpoint
	I0224 13:10:39.805974       1 shared_informer.go:320] Caches are synced for persistent volume
	I0224 13:10:39.813339       1 shared_informer.go:320] Caches are synced for job
	I0224 13:10:39.813495       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0224 13:10:39.813635       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0224 13:10:39.813663       1 shared_informer.go:320] Caches are synced for crt configmap
	I0224 13:10:39.813759       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0224 13:10:39.813782       1 shared_informer.go:320] Caches are synced for HPA
	I0224 13:10:39.813870       1 shared_informer.go:320] Caches are synced for PVC protection
	I0224 13:10:39.813768       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0224 13:10:39.814023       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="62.077µs"
	I0224 13:10:39.821511       1 shared_informer.go:320] Caches are synced for resource quota
	I0224 13:10:39.824791       1 shared_informer.go:320] Caches are synced for node
	I0224 13:10:39.824858       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0224 13:10:39.824913       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0224 13:10:39.824919       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0224 13:10:39.824924       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0224 13:10:39.825006       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-290993"
	I0224 13:10:39.830506       1 shared_informer.go:320] Caches are synced for service account
	I0224 13:10:39.832878       1 shared_informer.go:320] Caches are synced for ephemeral
	I0224 13:10:39.835164       1 shared_informer.go:320] Caches are synced for garbage collector
	I0224 13:10:39.844421       1 shared_informer.go:320] Caches are synced for namespace
	I0224 13:10:39.846800       1 shared_informer.go:320] Caches are synced for GC
	I0224 13:10:44.979338       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="164.69µs"
	
	
	==> kube-proxy [f69fc3df30b8e460e2137176d137957ab7add0f72b0fe68f02803a95f23ff915] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0224 13:10:35.203367       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0224 13:10:36.730409       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.72.181"]
	E0224 13:10:36.730850       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0224 13:10:36.806024       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0224 13:10:36.806131       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0224 13:10:36.806169       1 server_linux.go:170] "Using iptables Proxier"
	I0224 13:10:36.808971       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0224 13:10:36.810019       1 server.go:497] "Version info" version="v1.32.2"
	I0224 13:10:36.810087       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 13:10:36.817890       1 config.go:105] "Starting endpoint slice config controller"
	I0224 13:10:36.817951       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0224 13:10:36.818022       1 config.go:199] "Starting service config controller"
	I0224 13:10:36.818047       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0224 13:10:36.834449       1 config.go:329] "Starting node config controller"
	I0224 13:10:36.834464       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0224 13:10:36.918733       1 shared_informer.go:320] Caches are synced for service config
	I0224 13:10:36.918953       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0224 13:10:36.934539       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f8eafad8b80f5f834ac866ba0d91cb416b18b88038e42992066f312885fce532] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0224 13:11:01.596127       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0224 13:11:01.606916       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.72.181"]
	E0224 13:11:01.606992       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0224 13:11:01.653621       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0224 13:11:01.653684       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0224 13:11:01.653711       1 server_linux.go:170] "Using iptables Proxier"
	I0224 13:11:01.656596       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0224 13:11:01.656895       1 server.go:497] "Version info" version="v1.32.2"
	I0224 13:11:01.656925       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 13:11:01.658994       1 config.go:199] "Starting service config controller"
	I0224 13:11:01.659078       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0224 13:11:01.659118       1 config.go:105] "Starting endpoint slice config controller"
	I0224 13:11:01.659123       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0224 13:11:01.659543       1 config.go:329] "Starting node config controller"
	I0224 13:11:01.659576       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0224 13:11:01.759415       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0224 13:11:01.759512       1 shared_informer.go:320] Caches are synced for service config
	I0224 13:11:01.759693       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2c271018fa2fb5d42d0f6f6dc99dcfddfff803db08293f738c60c8cb621ff670] <==
	I0224 13:10:58.552055       1 serving.go:386] Generated self-signed cert in-memory
	W0224 13:11:00.278401       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0224 13:11:00.278582       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0224 13:11:00.278592       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0224 13:11:00.278599       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0224 13:11:00.373015       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0224 13:11:00.373119       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 13:11:00.377662       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0224 13:11:00.377963       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0224 13:11:00.378013       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0224 13:11:00.378043       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0224 13:11:00.478869       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c0eeb0c95859b126474282095e6479908c48c39791e874d93d6bb6eb25e0bbaa] <==
	I0224 13:10:35.065607       1 serving.go:386] Generated self-signed cert in-memory
	W0224 13:10:36.568140       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0224 13:10:36.568447       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0224 13:10:36.568550       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0224 13:10:36.568658       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0224 13:10:36.635765       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0224 13:10:36.637740       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 13:10:36.644657       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0224 13:10:36.646394       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0224 13:10:36.651284       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0224 13:10:36.646419       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0224 13:10:36.752734       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0224 13:10:55.372901       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Feb 24 13:11:00 pause-290993 kubelet[3203]: E0224 13:11:00.160836    3203 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-290993\" not found" node="pause-290993"
	Feb 24 13:11:00 pause-290993 kubelet[3203]: I0224 13:11:00.298599    3203 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-290993"
	Feb 24 13:11:00 pause-290993 kubelet[3203]: E0224 13:11:00.388504    3203 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-290993\" already exists" pod="kube-system/kube-controller-manager-pause-290993"
	Feb 24 13:11:00 pause-290993 kubelet[3203]: I0224 13:11:00.388553    3203 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-290993"
	Feb 24 13:11:00 pause-290993 kubelet[3203]: E0224 13:11:00.404412    3203 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-290993\" already exists" pod="kube-system/kube-scheduler-pause-290993"
	Feb 24 13:11:00 pause-290993 kubelet[3203]: I0224 13:11:00.404596    3203 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-290993"
	Feb 24 13:11:00 pause-290993 kubelet[3203]: E0224 13:11:00.415345    3203 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-290993\" already exists" pod="kube-system/etcd-pause-290993"
	Feb 24 13:11:00 pause-290993 kubelet[3203]: I0224 13:11:00.415393    3203 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-290993"
	Feb 24 13:11:00 pause-290993 kubelet[3203]: E0224 13:11:00.430409    3203 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-290993\" already exists" pod="kube-system/kube-apiserver-pause-290993"
	Feb 24 13:11:00 pause-290993 kubelet[3203]: I0224 13:11:00.433528    3203 kubelet_node_status.go:125] "Node was previously registered" node="pause-290993"
	Feb 24 13:11:00 pause-290993 kubelet[3203]: I0224 13:11:00.433631    3203 kubelet_node_status.go:79] "Successfully registered node" node="pause-290993"
	Feb 24 13:11:00 pause-290993 kubelet[3203]: I0224 13:11:00.433663    3203 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 24 13:11:00 pause-290993 kubelet[3203]: I0224 13:11:00.434583    3203 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 24 13:11:00 pause-290993 kubelet[3203]: I0224 13:11:00.967892    3203 apiserver.go:52] "Watching apiserver"
	Feb 24 13:11:00 pause-290993 kubelet[3203]: I0224 13:11:00.988606    3203 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Feb 24 13:11:01 pause-290993 kubelet[3203]: I0224 13:11:01.079152    3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cae36757-e93e-4727-9ed4-f05ee8363e3f-lib-modules\") pod \"kube-proxy-mk2vg\" (UID: \"cae36757-e93e-4727-9ed4-f05ee8363e3f\") " pod="kube-system/kube-proxy-mk2vg"
	Feb 24 13:11:01 pause-290993 kubelet[3203]: I0224 13:11:01.079277    3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cae36757-e93e-4727-9ed4-f05ee8363e3f-xtables-lock\") pod \"kube-proxy-mk2vg\" (UID: \"cae36757-e93e-4727-9ed4-f05ee8363e3f\") " pod="kube-system/kube-proxy-mk2vg"
	Feb 24 13:11:01 pause-290993 kubelet[3203]: I0224 13:11:01.274183    3203 scope.go:117] "RemoveContainer" containerID="8f1ea71953e2c9b870aaeb5ec033589e6c70cf49214670b79afaab38ade0a6e7"
	Feb 24 13:11:01 pause-290993 kubelet[3203]: I0224 13:11:01.276303    3203 scope.go:117] "RemoveContainer" containerID="f69fc3df30b8e460e2137176d137957ab7add0f72b0fe68f02803a95f23ff915"
	Feb 24 13:11:03 pause-290993 kubelet[3203]: I0224 13:11:03.182525    3203 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Feb 24 13:11:07 pause-290993 kubelet[3203]: E0224 13:11:07.117499    3203 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740402667115717189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 24 13:11:07 pause-290993 kubelet[3203]: E0224 13:11:07.117558    3203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740402667115717189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 24 13:11:11 pause-290993 kubelet[3203]: I0224 13:11:11.124252    3203 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Feb 24 13:11:17 pause-290993 kubelet[3203]: E0224 13:11:17.119601    3203 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740402677119086405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 24 13:11:17 pause-290993 kubelet[3203]: E0224 13:11:17.119664    3203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740402677119086405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
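Note: the repeated kube-proxy messages above ("Error cleaning up nftables rules ... Operation not supported") come from kube-proxy's best-effort cleanup of nftables rules; "Operation not supported" usually means the guest kernel does not expose the nf_tables API, and the run continues with the iptables proxier, as the "Using iptables Proxier" lines show. A minimal sketch for confirming this by hand (profile name taken from this run; it assumes the nft and iptables binaries are present in the guest image):

	out/minikube-linux-amd64 -p pause-290993 ssh "sudo nft list tables"        # expected to fail if nf_tables is unsupported
	out/minikube-linux-amd64 -p pause-290993 ssh "sudo iptables -t nat -L -n"  # the iptables path kube-proxy actually uses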
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-290993 -n pause-290993
helpers_test.go:261: (dbg) Run:  kubectl --context pause-290993 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (65.85s)
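To retrace this failure outside the harness, a rough sketch (the profile name, driver, and container runtime are taken from this report; the exact flags the test passes are assumptions, not the harness's literal invocation):

	out/minikube-linux-amd64 start -p pause-290993 --driver=kvm2 --container-runtime=crio
	# second start against the existing profile, which is what SecondStartNoReconfiguration exercises
	out/minikube-linux-amd64 start -p pause-290993 --alsologtostderr -v=1
	out/minikube-linux-amd64 -p pause-290993 logs -n 25
	kubectl --context pause-290993 get pods -A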

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (287.71s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-233759 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-233759 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m47.394480408s)
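The logs below show the kvm2 driver creating a dedicated libvirt network (mk-old-k8s-version-233759) and domain (old-k8s-version-233759), and the control-plane bootstrap being retried (the repeated "Booting up control plane" lines in the stdout). A sketch of virsh commands for inspecting those resources on the host (the qemu:///system URI comes from the test flags; direct virsh access is assumed):

	virsh -c qemu:///system net-list --all                          # should include mk-old-k8s-version-233759
	virsh -c qemu:///system net-dumpxml mk-old-k8s-version-233759   # network XML as defined by the driver
	virsh -c qemu:///system list --all                              # domain state
	virsh -c qemu:///system dumpxml old-k8s-version-233759          # domain XML as defined by the driver
	virsh -c qemu:///system domifaddr old-k8s-version-233759        # DHCP lease / IP the driver waits on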

                                                
                                                
-- stdout --
	* [old-k8s-version-233759] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20451
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20451-887294/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-887294/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-233759" primary control-plane node in "old-k8s-version-233759" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0224 13:17:15.742460  945004 out.go:345] Setting OutFile to fd 1 ...
	I0224 13:17:15.742593  945004 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 13:17:15.742603  945004 out.go:358] Setting ErrFile to fd 2...
	I0224 13:17:15.742607  945004 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 13:17:15.742821  945004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-887294/.minikube/bin
	I0224 13:17:15.743454  945004 out.go:352] Setting JSON to false
	I0224 13:17:15.744645  945004 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10777,"bootTime":1740392259,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 13:17:15.744757  945004 start.go:139] virtualization: kvm guest
	I0224 13:17:15.747180  945004 out.go:177] * [old-k8s-version-233759] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0224 13:17:15.748552  945004 out.go:177]   - MINIKUBE_LOCATION=20451
	I0224 13:17:15.748567  945004 notify.go:220] Checking for updates...
	I0224 13:17:15.751586  945004 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 13:17:15.753222  945004 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20451-887294/kubeconfig
	I0224 13:17:15.754512  945004 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-887294/.minikube
	I0224 13:17:15.755914  945004 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0224 13:17:15.757300  945004 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 13:17:15.759178  945004 config.go:182] Loaded profile config "bridge-799329": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 13:17:15.759324  945004 config.go:182] Loaded profile config "enable-default-cni-799329": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 13:17:15.759450  945004 config.go:182] Loaded profile config "flannel-799329": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 13:17:15.759619  945004 driver.go:394] Setting default libvirt URI to qemu:///system
	I0224 13:17:15.799815  945004 out.go:177] * Using the kvm2 driver based on user configuration
	I0224 13:17:15.801531  945004 start.go:297] selected driver: kvm2
	I0224 13:17:15.801566  945004 start.go:901] validating driver "kvm2" against <nil>
	I0224 13:17:15.801583  945004 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 13:17:15.802332  945004 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 13:17:15.802435  945004 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20451-887294/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0224 13:17:15.819290  945004 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0224 13:17:15.819357  945004 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0224 13:17:15.819738  945004 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0224 13:17:15.819804  945004 cni.go:84] Creating CNI manager for ""
	I0224 13:17:15.819873  945004 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0224 13:17:15.819886  945004 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0224 13:17:15.819966  945004 start.go:340] cluster config:
	{Name:old-k8s-version-233759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-233759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 13:17:15.820126  945004 iso.go:125] acquiring lock: {Name:mk57408cca66a96a13d93cda43cdfac6e61aef3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 13:17:15.822177  945004 out.go:177] * Starting "old-k8s-version-233759" primary control-plane node in "old-k8s-version-233759" cluster
	I0224 13:17:15.823470  945004 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0224 13:17:15.823527  945004 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0224 13:17:15.823542  945004 cache.go:56] Caching tarball of preloaded images
	I0224 13:17:15.823681  945004 preload.go:172] Found /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0224 13:17:15.823697  945004 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0224 13:17:15.823808  945004 profile.go:143] Saving config to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/config.json ...
	I0224 13:17:15.823837  945004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/config.json: {Name:mk0170c135264270baf9117fb286234a29f0cbf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:17:15.823992  945004 start.go:360] acquireMachinesLock for old-k8s-version-233759: {Name:mk023761b01bb629a1acd40bc8104cc517b0e15b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0224 13:17:27.858912  945004 start.go:364] duration metric: took 12.034859607s to acquireMachinesLock for "old-k8s-version-233759"
	I0224 13:17:27.859042  945004 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-233759 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-233759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0224 13:17:27.859186  945004 start.go:125] createHost starting for "" (driver="kvm2")
	I0224 13:17:27.861274  945004 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0224 13:17:27.861585  945004 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:17:27.861697  945004 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:17:27.879388  945004 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42577
	I0224 13:17:27.879834  945004 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:17:27.880502  945004 main.go:141] libmachine: Using API Version  1
	I0224 13:17:27.880528  945004 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:17:27.880884  945004 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:17:27.881094  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetMachineName
	I0224 13:17:27.881237  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .DriverName
	I0224 13:17:27.881457  945004 start.go:159] libmachine.API.Create for "old-k8s-version-233759" (driver="kvm2")
	I0224 13:17:27.881497  945004 client.go:168] LocalClient.Create starting
	I0224 13:17:27.881537  945004 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem
	I0224 13:17:27.881594  945004 main.go:141] libmachine: Decoding PEM data...
	I0224 13:17:27.881616  945004 main.go:141] libmachine: Parsing certificate...
	I0224 13:17:27.881689  945004 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem
	I0224 13:17:27.881716  945004 main.go:141] libmachine: Decoding PEM data...
	I0224 13:17:27.881735  945004 main.go:141] libmachine: Parsing certificate...
	I0224 13:17:27.881761  945004 main.go:141] libmachine: Running pre-create checks...
	I0224 13:17:27.881774  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .PreCreateCheck
	I0224 13:17:27.882156  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetConfigRaw
	I0224 13:17:27.882680  945004 main.go:141] libmachine: Creating machine...
	I0224 13:17:27.882698  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .Create
	I0224 13:17:27.882864  945004 main.go:141] libmachine: (old-k8s-version-233759) creating KVM machine...
	I0224 13:17:27.882881  945004 main.go:141] libmachine: (old-k8s-version-233759) creating network...
	I0224 13:17:27.884266  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | found existing default KVM network
	I0224 13:17:27.885834  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:17:27.885623  945120 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:97:6f:50} reservation:<nil>}
	I0224 13:17:27.887097  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:17:27.887000  945120 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000286a40}
	I0224 13:17:27.887124  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | created network xml: 
	I0224 13:17:27.887133  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | <network>
	I0224 13:17:27.887145  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG |   <name>mk-old-k8s-version-233759</name>
	I0224 13:17:27.887157  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG |   <dns enable='no'/>
	I0224 13:17:27.887165  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG |   
	I0224 13:17:27.887174  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0224 13:17:27.887192  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG |     <dhcp>
	I0224 13:17:27.887210  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0224 13:17:27.887215  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG |     </dhcp>
	I0224 13:17:27.887219  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG |   </ip>
	I0224 13:17:27.887227  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG |   
	I0224 13:17:27.887233  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | </network>
	I0224 13:17:27.887243  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | 
	I0224 13:17:27.893594  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | trying to create private KVM network mk-old-k8s-version-233759 192.168.50.0/24...
	I0224 13:17:27.987269  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | private KVM network mk-old-k8s-version-233759 192.168.50.0/24 created
	I0224 13:17:27.987323  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:17:27.987227  945120 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20451-887294/.minikube
	I0224 13:17:27.987343  945004 main.go:141] libmachine: (old-k8s-version-233759) setting up store path in /home/jenkins/minikube-integration/20451-887294/.minikube/machines/old-k8s-version-233759 ...
	I0224 13:17:27.987356  945004 main.go:141] libmachine: (old-k8s-version-233759) building disk image from file:///home/jenkins/minikube-integration/20451-887294/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0224 13:17:27.987426  945004 main.go:141] libmachine: (old-k8s-version-233759) Downloading /home/jenkins/minikube-integration/20451-887294/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20451-887294/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0224 13:17:28.295831  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:17:28.295676  945120 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/old-k8s-version-233759/id_rsa...
	I0224 13:17:28.350938  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:17:28.350740  945120 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/old-k8s-version-233759/old-k8s-version-233759.rawdisk...
	I0224 13:17:28.350980  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | Writing magic tar header
	I0224 13:17:28.351002  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | Writing SSH key tar header
	I0224 13:17:28.351014  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:17:28.350938  945120 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20451-887294/.minikube/machines/old-k8s-version-233759 ...
	I0224 13:17:28.351071  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/old-k8s-version-233759
	I0224 13:17:28.351182  945004 main.go:141] libmachine: (old-k8s-version-233759) setting executable bit set on /home/jenkins/minikube-integration/20451-887294/.minikube/machines/old-k8s-version-233759 (perms=drwx------)
	I0224 13:17:28.351214  945004 main.go:141] libmachine: (old-k8s-version-233759) setting executable bit set on /home/jenkins/minikube-integration/20451-887294/.minikube/machines (perms=drwxr-xr-x)
	I0224 13:17:28.351227  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20451-887294/.minikube/machines
	I0224 13:17:28.351242  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20451-887294/.minikube
	I0224 13:17:28.351256  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20451-887294
	I0224 13:17:28.351285  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0224 13:17:28.351303  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | checking permissions on dir: /home/jenkins
	I0224 13:17:28.351315  945004 main.go:141] libmachine: (old-k8s-version-233759) setting executable bit set on /home/jenkins/minikube-integration/20451-887294/.minikube (perms=drwxr-xr-x)
	I0224 13:17:28.351333  945004 main.go:141] libmachine: (old-k8s-version-233759) setting executable bit set on /home/jenkins/minikube-integration/20451-887294 (perms=drwxrwxr-x)
	I0224 13:17:28.351346  945004 main.go:141] libmachine: (old-k8s-version-233759) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0224 13:17:28.351360  945004 main.go:141] libmachine: (old-k8s-version-233759) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0224 13:17:28.351371  945004 main.go:141] libmachine: (old-k8s-version-233759) creating domain...
	I0224 13:17:28.351384  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | checking permissions on dir: /home
	I0224 13:17:28.351401  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | skipping /home - not owner
	I0224 13:17:28.353011  945004 main.go:141] libmachine: (old-k8s-version-233759) define libvirt domain using xml: 
	I0224 13:17:28.353042  945004 main.go:141] libmachine: (old-k8s-version-233759) <domain type='kvm'>
	I0224 13:17:28.353055  945004 main.go:141] libmachine: (old-k8s-version-233759)   <name>old-k8s-version-233759</name>
	I0224 13:17:28.353062  945004 main.go:141] libmachine: (old-k8s-version-233759)   <memory unit='MiB'>2200</memory>
	I0224 13:17:28.353097  945004 main.go:141] libmachine: (old-k8s-version-233759)   <vcpu>2</vcpu>
	I0224 13:17:28.353104  945004 main.go:141] libmachine: (old-k8s-version-233759)   <features>
	I0224 13:17:28.353111  945004 main.go:141] libmachine: (old-k8s-version-233759)     <acpi/>
	I0224 13:17:28.353119  945004 main.go:141] libmachine: (old-k8s-version-233759)     <apic/>
	I0224 13:17:28.353128  945004 main.go:141] libmachine: (old-k8s-version-233759)     <pae/>
	I0224 13:17:28.353139  945004 main.go:141] libmachine: (old-k8s-version-233759)     
	I0224 13:17:28.353147  945004 main.go:141] libmachine: (old-k8s-version-233759)   </features>
	I0224 13:17:28.353157  945004 main.go:141] libmachine: (old-k8s-version-233759)   <cpu mode='host-passthrough'>
	I0224 13:17:28.353165  945004 main.go:141] libmachine: (old-k8s-version-233759)   
	I0224 13:17:28.353174  945004 main.go:141] libmachine: (old-k8s-version-233759)   </cpu>
	I0224 13:17:28.353181  945004 main.go:141] libmachine: (old-k8s-version-233759)   <os>
	I0224 13:17:28.353188  945004 main.go:141] libmachine: (old-k8s-version-233759)     <type>hvm</type>
	I0224 13:17:28.353195  945004 main.go:141] libmachine: (old-k8s-version-233759)     <boot dev='cdrom'/>
	I0224 13:17:28.353202  945004 main.go:141] libmachine: (old-k8s-version-233759)     <boot dev='hd'/>
	I0224 13:17:28.353211  945004 main.go:141] libmachine: (old-k8s-version-233759)     <bootmenu enable='no'/>
	I0224 13:17:28.353245  945004 main.go:141] libmachine: (old-k8s-version-233759)   </os>
	I0224 13:17:28.353258  945004 main.go:141] libmachine: (old-k8s-version-233759)   <devices>
	I0224 13:17:28.353264  945004 main.go:141] libmachine: (old-k8s-version-233759)     <disk type='file' device='cdrom'>
	I0224 13:17:28.353281  945004 main.go:141] libmachine: (old-k8s-version-233759)       <source file='/home/jenkins/minikube-integration/20451-887294/.minikube/machines/old-k8s-version-233759/boot2docker.iso'/>
	I0224 13:17:28.353292  945004 main.go:141] libmachine: (old-k8s-version-233759)       <target dev='hdc' bus='scsi'/>
	I0224 13:17:28.353300  945004 main.go:141] libmachine: (old-k8s-version-233759)       <readonly/>
	I0224 13:17:28.353339  945004 main.go:141] libmachine: (old-k8s-version-233759)     </disk>
	I0224 13:17:28.353349  945004 main.go:141] libmachine: (old-k8s-version-233759)     <disk type='file' device='disk'>
	I0224 13:17:28.353363  945004 main.go:141] libmachine: (old-k8s-version-233759)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0224 13:17:28.353382  945004 main.go:141] libmachine: (old-k8s-version-233759)       <source file='/home/jenkins/minikube-integration/20451-887294/.minikube/machines/old-k8s-version-233759/old-k8s-version-233759.rawdisk'/>
	I0224 13:17:28.353393  945004 main.go:141] libmachine: (old-k8s-version-233759)       <target dev='hda' bus='virtio'/>
	I0224 13:17:28.353405  945004 main.go:141] libmachine: (old-k8s-version-233759)     </disk>
	I0224 13:17:28.353412  945004 main.go:141] libmachine: (old-k8s-version-233759)     <interface type='network'>
	I0224 13:17:28.353421  945004 main.go:141] libmachine: (old-k8s-version-233759)       <source network='mk-old-k8s-version-233759'/>
	I0224 13:17:28.353433  945004 main.go:141] libmachine: (old-k8s-version-233759)       <model type='virtio'/>
	I0224 13:17:28.353442  945004 main.go:141] libmachine: (old-k8s-version-233759)     </interface>
	I0224 13:17:28.353449  945004 main.go:141] libmachine: (old-k8s-version-233759)     <interface type='network'>
	I0224 13:17:28.353459  945004 main.go:141] libmachine: (old-k8s-version-233759)       <source network='default'/>
	I0224 13:17:28.353475  945004 main.go:141] libmachine: (old-k8s-version-233759)       <model type='virtio'/>
	I0224 13:17:28.353483  945004 main.go:141] libmachine: (old-k8s-version-233759)     </interface>
	I0224 13:17:28.353490  945004 main.go:141] libmachine: (old-k8s-version-233759)     <serial type='pty'>
	I0224 13:17:28.353497  945004 main.go:141] libmachine: (old-k8s-version-233759)       <target port='0'/>
	I0224 13:17:28.353503  945004 main.go:141] libmachine: (old-k8s-version-233759)     </serial>
	I0224 13:17:28.353510  945004 main.go:141] libmachine: (old-k8s-version-233759)     <console type='pty'>
	I0224 13:17:28.353517  945004 main.go:141] libmachine: (old-k8s-version-233759)       <target type='serial' port='0'/>
	I0224 13:17:28.353525  945004 main.go:141] libmachine: (old-k8s-version-233759)     </console>
	I0224 13:17:28.353532  945004 main.go:141] libmachine: (old-k8s-version-233759)     <rng model='virtio'>
	I0224 13:17:28.353541  945004 main.go:141] libmachine: (old-k8s-version-233759)       <backend model='random'>/dev/random</backend>
	I0224 13:17:28.353546  945004 main.go:141] libmachine: (old-k8s-version-233759)     </rng>
	I0224 13:17:28.353553  945004 main.go:141] libmachine: (old-k8s-version-233759)     
	I0224 13:17:28.353560  945004 main.go:141] libmachine: (old-k8s-version-233759)     
	I0224 13:17:28.353568  945004 main.go:141] libmachine: (old-k8s-version-233759)   </devices>
	I0224 13:17:28.353575  945004 main.go:141] libmachine: (old-k8s-version-233759) </domain>
	I0224 13:17:28.353586  945004 main.go:141] libmachine: (old-k8s-version-233759) 
	I0224 13:17:28.358316  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:db:a8:2a in network default
	I0224 13:17:28.359036  945004 main.go:141] libmachine: (old-k8s-version-233759) starting domain...
	I0224 13:17:28.359060  945004 main.go:141] libmachine: (old-k8s-version-233759) ensuring networks are active...
	I0224 13:17:28.359073  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:28.359965  945004 main.go:141] libmachine: (old-k8s-version-233759) Ensuring network default is active
	I0224 13:17:28.360284  945004 main.go:141] libmachine: (old-k8s-version-233759) Ensuring network mk-old-k8s-version-233759 is active
	I0224 13:17:28.360917  945004 main.go:141] libmachine: (old-k8s-version-233759) getting domain XML...
	I0224 13:17:28.361871  945004 main.go:141] libmachine: (old-k8s-version-233759) creating domain...
	I0224 13:17:29.886797  945004 main.go:141] libmachine: (old-k8s-version-233759) waiting for IP...
	I0224 13:17:29.887724  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:29.888350  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | unable to find current IP address of domain old-k8s-version-233759 in network mk-old-k8s-version-233759
	I0224 13:17:29.888445  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:17:29.888339  945120 retry.go:31] will retry after 303.200538ms: waiting for domain to come up
	I0224 13:17:30.192931  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:30.193619  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | unable to find current IP address of domain old-k8s-version-233759 in network mk-old-k8s-version-233759
	I0224 13:17:30.193652  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:17:30.193536  945120 retry.go:31] will retry after 283.160935ms: waiting for domain to come up
	I0224 13:17:30.479651  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:30.481371  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | unable to find current IP address of domain old-k8s-version-233759 in network mk-old-k8s-version-233759
	I0224 13:17:30.481503  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:17:30.481223  945120 retry.go:31] will retry after 431.155314ms: waiting for domain to come up
	I0224 13:17:30.919957  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:30.921790  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | unable to find current IP address of domain old-k8s-version-233759 in network mk-old-k8s-version-233759
	I0224 13:17:30.921820  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:17:30.921488  945120 retry.go:31] will retry after 405.534023ms: waiting for domain to come up
	I0224 13:17:31.328495  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:31.329165  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | unable to find current IP address of domain old-k8s-version-233759 in network mk-old-k8s-version-233759
	I0224 13:17:31.329195  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:17:31.329073  945120 retry.go:31] will retry after 688.860859ms: waiting for domain to come up
	I0224 13:17:32.019628  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:32.020357  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | unable to find current IP address of domain old-k8s-version-233759 in network mk-old-k8s-version-233759
	I0224 13:17:32.020397  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:17:32.020249  945120 retry.go:31] will retry after 689.039505ms: waiting for domain to come up
	I0224 13:17:32.710929  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:32.711886  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | unable to find current IP address of domain old-k8s-version-233759 in network mk-old-k8s-version-233759
	I0224 13:17:32.711922  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:17:32.711826  945120 retry.go:31] will retry after 943.914165ms: waiting for domain to come up
	I0224 13:17:33.657784  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:33.658421  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | unable to find current IP address of domain old-k8s-version-233759 in network mk-old-k8s-version-233759
	I0224 13:17:33.658454  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:17:33.658369  945120 retry.go:31] will retry after 1.170515464s: waiting for domain to come up
	I0224 13:17:34.830568  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:34.831166  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | unable to find current IP address of domain old-k8s-version-233759 in network mk-old-k8s-version-233759
	I0224 13:17:34.831195  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:17:34.831109  945120 retry.go:31] will retry after 1.769736633s: waiting for domain to come up
	I0224 13:17:36.602705  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:36.603308  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | unable to find current IP address of domain old-k8s-version-233759 in network mk-old-k8s-version-233759
	I0224 13:17:36.603369  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:17:36.603269  945120 retry.go:31] will retry after 2.127975088s: waiting for domain to come up
	I0224 13:17:38.732692  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:38.733297  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | unable to find current IP address of domain old-k8s-version-233759 in network mk-old-k8s-version-233759
	I0224 13:17:38.733395  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:17:38.733288  945120 retry.go:31] will retry after 2.009247602s: waiting for domain to come up
	I0224 13:17:40.745296  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:40.745911  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | unable to find current IP address of domain old-k8s-version-233759 in network mk-old-k8s-version-233759
	I0224 13:17:40.746000  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:17:40.745906  945120 retry.go:31] will retry after 3.432197671s: waiting for domain to come up
	I0224 13:17:44.179450  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:44.179938  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | unable to find current IP address of domain old-k8s-version-233759 in network mk-old-k8s-version-233759
	I0224 13:17:44.179968  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:17:44.179894  945120 retry.go:31] will retry after 2.947733435s: waiting for domain to come up
	I0224 13:17:47.131058  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:47.131623  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | unable to find current IP address of domain old-k8s-version-233759 in network mk-old-k8s-version-233759
	I0224 13:17:47.131658  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:17:47.131601  945120 retry.go:31] will retry after 4.839258566s: waiting for domain to come up
	I0224 13:17:51.978765  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:51.979917  945004 main.go:141] libmachine: (old-k8s-version-233759) found domain IP: 192.168.50.62
	I0224 13:17:51.979939  945004 main.go:141] libmachine: (old-k8s-version-233759) reserving static IP address...
	I0224 13:17:51.979992  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has current primary IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:51.980907  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-233759", mac: "52:54:00:cd:a9:f6", ip: "192.168.50.62"} in network mk-old-k8s-version-233759
	I0224 13:17:52.141888  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | Getting to WaitForSSH function...
	I0224 13:17:52.141947  945004 main.go:141] libmachine: (old-k8s-version-233759) reserved static IP address 192.168.50.62 for domain old-k8s-version-233759
	I0224 13:17:52.141970  945004 main.go:141] libmachine: (old-k8s-version-233759) waiting for SSH...
	I0224 13:17:52.160494  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:52.160530  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:a9:f6", ip: ""} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:17:45 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:minikube Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:17:52.160549  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:52.161736  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | Using SSH client type: external
	I0224 13:17:52.161764  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | Using SSH private key: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/old-k8s-version-233759/id_rsa (-rw-------)
	I0224 13:17:52.161796  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20451-887294/.minikube/machines/old-k8s-version-233759/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0224 13:17:52.161805  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | About to run SSH command:
	I0224 13:17:52.161817  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | exit 0
	I0224 13:17:52.314521  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | SSH cmd err, output: <nil>: 
	I0224 13:17:52.314807  945004 main.go:141] libmachine: (old-k8s-version-233759) KVM machine creation complete
	I0224 13:17:52.315225  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetConfigRaw
	I0224 13:17:52.316095  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .DriverName
	I0224 13:17:52.316392  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .DriverName
	I0224 13:17:52.316640  945004 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0224 13:17:52.316660  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetState
	I0224 13:17:52.319149  945004 main.go:141] libmachine: Detecting operating system of created instance...
	I0224 13:17:52.319168  945004 main.go:141] libmachine: Waiting for SSH to be available...
	I0224 13:17:52.319175  945004 main.go:141] libmachine: Getting to WaitForSSH function...
	I0224 13:17:52.319183  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHHostname
	I0224 13:17:52.325888  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:52.327199  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:a9:f6", ip: ""} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:17:45 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:old-k8s-version-233759 Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:17:52.327231  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:52.329576  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHPort
	I0224 13:17:52.329810  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHKeyPath
	I0224 13:17:52.329957  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHKeyPath
	I0224 13:17:52.330130  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHUsername
	I0224 13:17:52.330302  945004 main.go:141] libmachine: Using SSH client type: native
	I0224 13:17:52.330575  945004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0224 13:17:52.330585  945004 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0224 13:17:52.471197  945004 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 13:17:52.471231  945004 main.go:141] libmachine: Detecting the provisioner...
	I0224 13:17:52.471243  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHHostname
	I0224 13:17:52.475214  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:52.475811  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:a9:f6", ip: ""} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:17:45 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:old-k8s-version-233759 Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:17:52.475848  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:52.476182  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHPort
	I0224 13:17:52.476376  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHKeyPath
	I0224 13:17:52.476636  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHKeyPath
	I0224 13:17:52.476813  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHUsername
	I0224 13:17:52.477010  945004 main.go:141] libmachine: Using SSH client type: native
	I0224 13:17:52.477252  945004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0224 13:17:52.477265  945004 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0224 13:17:52.662099  945004 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0224 13:17:52.662194  945004 main.go:141] libmachine: found compatible host: buildroot
	I0224 13:17:52.662204  945004 main.go:141] libmachine: Provisioning with buildroot...
	I0224 13:17:52.662216  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetMachineName
	I0224 13:17:52.662524  945004 buildroot.go:166] provisioning hostname "old-k8s-version-233759"
	I0224 13:17:52.662551  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetMachineName
	I0224 13:17:52.662732  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHHostname
	I0224 13:17:52.668638  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:52.669352  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:a9:f6", ip: ""} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:17:45 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:old-k8s-version-233759 Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:17:52.669379  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:52.669631  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHPort
	I0224 13:17:52.669813  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHKeyPath
	I0224 13:17:52.669960  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHKeyPath
	I0224 13:17:52.670074  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHUsername
	I0224 13:17:52.670238  945004 main.go:141] libmachine: Using SSH client type: native
	I0224 13:17:52.670490  945004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0224 13:17:52.670503  945004 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-233759 && echo "old-k8s-version-233759" | sudo tee /etc/hostname
	I0224 13:17:52.842524  945004 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-233759
	
	I0224 13:17:52.842564  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHHostname
	I0224 13:17:52.846380  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:52.846869  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:a9:f6", ip: ""} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:17:45 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:old-k8s-version-233759 Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:17:52.846899  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:52.847439  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHPort
	I0224 13:17:52.847666  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHKeyPath
	I0224 13:17:52.847808  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHKeyPath
	I0224 13:17:52.847940  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHUsername
	I0224 13:17:52.848128  945004 main.go:141] libmachine: Using SSH client type: native
	I0224 13:17:52.848348  945004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0224 13:17:52.848370  945004 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-233759' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-233759/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-233759' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 13:17:53.010876  945004 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 13:17:53.010912  945004 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20451-887294/.minikube CaCertPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20451-887294/.minikube}
	I0224 13:17:53.010935  945004 buildroot.go:174] setting up certificates
	I0224 13:17:53.010949  945004 provision.go:84] configureAuth start
	I0224 13:17:53.010962  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetMachineName
	I0224 13:17:53.011310  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetIP
	I0224 13:17:53.049366  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:53.051782  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:a9:f6", ip: ""} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:17:45 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:old-k8s-version-233759 Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:17:53.051830  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:53.052168  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHHostname
	I0224 13:17:53.055074  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:53.056030  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:a9:f6", ip: ""} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:17:45 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:old-k8s-version-233759 Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:17:53.056058  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:53.056311  945004 provision.go:143] copyHostCerts
	I0224 13:17:53.056381  945004 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-887294/.minikube/key.pem, removing ...
	I0224 13:17:53.056391  945004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-887294/.minikube/key.pem
	I0224 13:17:53.056475  945004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20451-887294/.minikube/key.pem (1679 bytes)
	I0224 13:17:53.056612  945004 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-887294/.minikube/ca.pem, removing ...
	I0224 13:17:53.056623  945004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-887294/.minikube/ca.pem
	I0224 13:17:53.056656  945004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20451-887294/.minikube/ca.pem (1082 bytes)
	I0224 13:17:53.056725  945004 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-887294/.minikube/cert.pem, removing ...
	I0224 13:17:53.056731  945004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-887294/.minikube/cert.pem
	I0224 13:17:53.056753  945004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20451-887294/.minikube/cert.pem (1123 bytes)
	I0224 13:17:53.056819  945004 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-233759 san=[127.0.0.1 192.168.50.62 localhost minikube old-k8s-version-233759]
	I0224 13:17:53.306354  945004 provision.go:177] copyRemoteCerts
	I0224 13:17:53.306479  945004 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 13:17:53.306537  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHHostname
	I0224 13:17:53.309920  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:53.310434  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:a9:f6", ip: ""} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:17:45 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:old-k8s-version-233759 Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:17:53.310485  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:53.310720  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHPort
	I0224 13:17:53.310916  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHKeyPath
	I0224 13:17:53.311067  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHUsername
	I0224 13:17:53.311219  945004 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/old-k8s-version-233759/id_rsa Username:docker}
	I0224 13:17:53.448296  945004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0224 13:17:53.499693  945004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0224 13:17:53.545730  945004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0224 13:17:53.598012  945004 provision.go:87] duration metric: took 587.047358ms to configureAuth
	I0224 13:17:53.598048  945004 buildroot.go:189] setting minikube options for container-runtime
	I0224 13:17:53.598298  945004 config.go:182] Loaded profile config "old-k8s-version-233759": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0224 13:17:53.598396  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHHostname
	I0224 13:17:53.604979  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:53.605661  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:a9:f6", ip: ""} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:17:45 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:old-k8s-version-233759 Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:17:53.605694  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:53.606083  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHPort
	I0224 13:17:53.606293  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHKeyPath
	I0224 13:17:53.606452  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHKeyPath
	I0224 13:17:53.606587  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHUsername
	I0224 13:17:53.606751  945004 main.go:141] libmachine: Using SSH client type: native
	I0224 13:17:53.607004  945004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0224 13:17:53.607035  945004 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0224 13:17:53.953926  945004 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0224 13:17:53.953957  945004 main.go:141] libmachine: Checking connection to Docker...
	I0224 13:17:53.953968  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetURL
	I0224 13:17:53.958647  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | using libvirt version 6000000
	I0224 13:17:53.961627  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:53.962069  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:a9:f6", ip: ""} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:17:45 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:old-k8s-version-233759 Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:17:53.962096  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:53.962516  945004 main.go:141] libmachine: Docker is up and running!
	I0224 13:17:53.962532  945004 main.go:141] libmachine: Reticulating splines...
	I0224 13:17:53.962540  945004 client.go:171] duration metric: took 26.081034577s to LocalClient.Create
	I0224 13:17:53.962565  945004 start.go:167] duration metric: took 26.081111938s to libmachine.API.Create "old-k8s-version-233759"
	I0224 13:17:53.962578  945004 start.go:293] postStartSetup for "old-k8s-version-233759" (driver="kvm2")
	I0224 13:17:53.962594  945004 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 13:17:53.962624  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .DriverName
	I0224 13:17:53.962956  945004 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 13:17:53.962988  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHHostname
	I0224 13:17:53.967171  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:53.967658  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:a9:f6", ip: ""} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:17:45 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:old-k8s-version-233759 Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:17:53.967693  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:53.967914  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHPort
	I0224 13:17:53.968096  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHKeyPath
	I0224 13:17:53.968248  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHUsername
	I0224 13:17:53.968349  945004 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/old-k8s-version-233759/id_rsa Username:docker}
	I0224 13:17:54.114753  945004 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 13:17:54.133682  945004 info.go:137] Remote host: Buildroot 2023.02.9
	I0224 13:17:54.133719  945004 filesync.go:126] Scanning /home/jenkins/minikube-integration/20451-887294/.minikube/addons for local assets ...
	I0224 13:17:54.133824  945004 filesync.go:126] Scanning /home/jenkins/minikube-integration/20451-887294/.minikube/files for local assets ...
	I0224 13:17:54.133924  945004 filesync.go:149] local asset: /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/ssl/certs/8945642.pem -> 8945642.pem in /etc/ssl/certs
	I0224 13:17:54.134055  945004 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0224 13:17:54.158449  945004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/ssl/certs/8945642.pem --> /etc/ssl/certs/8945642.pem (1708 bytes)
	I0224 13:17:54.203188  945004 start.go:296] duration metric: took 240.588266ms for postStartSetup
	I0224 13:17:54.203256  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetConfigRaw
	I0224 13:17:54.204135  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetIP
	I0224 13:17:54.208651  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:54.209357  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:a9:f6", ip: ""} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:17:45 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:old-k8s-version-233759 Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:17:54.209391  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:54.209823  945004 profile.go:143] Saving config to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/config.json ...
	I0224 13:17:54.210100  945004 start.go:128] duration metric: took 26.350893185s to createHost
	I0224 13:17:54.210131  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHHostname
	I0224 13:17:54.213441  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:54.213893  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:a9:f6", ip: ""} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:17:45 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:old-k8s-version-233759 Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:17:54.213926  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:54.214192  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHPort
	I0224 13:17:54.214421  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHKeyPath
	I0224 13:17:54.214616  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHKeyPath
	I0224 13:17:54.214787  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHUsername
	I0224 13:17:54.215010  945004 main.go:141] libmachine: Using SSH client type: native
	I0224 13:17:54.215246  945004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0224 13:17:54.215264  945004 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0224 13:17:54.373901  945004 main.go:141] libmachine: SSH cmd err, output: <nil>: 1740403074.350152857
	
	I0224 13:17:54.373927  945004 fix.go:216] guest clock: 1740403074.350152857
	I0224 13:17:54.373935  945004 fix.go:229] Guest: 2025-02-24 13:17:54.350152857 +0000 UTC Remote: 2025-02-24 13:17:54.210114228 +0000 UTC m=+38.509082179 (delta=140.038629ms)
	I0224 13:17:54.373961  945004 fix.go:200] guest clock delta is within tolerance: 140.038629ms
	I0224 13:17:54.373973  945004 start.go:83] releasing machines lock for "old-k8s-version-233759", held for 26.514978656s
	I0224 13:17:54.373999  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .DriverName
	I0224 13:17:54.376016  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetIP
	I0224 13:17:54.380430  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:54.380851  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:a9:f6", ip: ""} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:17:45 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:old-k8s-version-233759 Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:17:54.380889  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:54.381391  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .DriverName
	I0224 13:17:54.382027  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .DriverName
	I0224 13:17:54.382244  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .DriverName
	I0224 13:17:54.382318  945004 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0224 13:17:54.382370  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHHostname
	I0224 13:17:54.382618  945004 ssh_runner.go:195] Run: cat /version.json
	I0224 13:17:54.382650  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHHostname
	I0224 13:17:54.386743  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:54.387110  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:a9:f6", ip: ""} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:17:45 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:old-k8s-version-233759 Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:17:54.387139  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:54.387831  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHPort
	I0224 13:17:54.388430  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHKeyPath
	I0224 13:17:54.388634  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHUsername
	I0224 13:17:54.388810  945004 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/old-k8s-version-233759/id_rsa Username:docker}
	I0224 13:17:54.396179  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:54.396303  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:a9:f6", ip: ""} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:17:45 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:old-k8s-version-233759 Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:17:54.396334  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:54.396543  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHPort
	I0224 13:17:54.396787  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHKeyPath
	I0224 13:17:54.396970  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHUsername
	I0224 13:17:54.397183  945004 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/old-k8s-version-233759/id_rsa Username:docker}
	I0224 13:17:54.500508  945004 ssh_runner.go:195] Run: systemctl --version
	I0224 13:17:54.508444  945004 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0224 13:17:54.771322  945004 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0224 13:17:54.785669  945004 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0224 13:17:54.785753  945004 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0224 13:17:54.825480  945004 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0224 13:17:54.825514  945004 start.go:495] detecting cgroup driver to use...
	I0224 13:17:54.825597  945004 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0224 13:17:54.849931  945004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0224 13:17:54.870586  945004 docker.go:217] disabling cri-docker service (if available) ...
	I0224 13:17:54.870643  945004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0224 13:17:54.889741  945004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0224 13:17:54.910249  945004 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0224 13:17:55.092703  945004 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0224 13:17:55.309600  945004 docker.go:233] disabling docker service ...
	I0224 13:17:55.309682  945004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0224 13:17:55.335052  945004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0224 13:17:55.351706  945004 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0224 13:17:55.513009  945004 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0224 13:17:55.656811  945004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0224 13:17:55.673154  945004 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 13:17:55.699944  945004 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0224 13:17:55.700011  945004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:17:55.714897  945004 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0224 13:17:55.714994  945004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:17:55.727627  945004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:17:55.741411  945004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:17:55.753525  945004 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 13:17:55.765249  945004 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 13:17:55.775604  945004 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0224 13:17:55.775683  945004 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0224 13:17:55.791615  945004 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0224 13:17:55.803928  945004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 13:17:55.965378  945004 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0224 13:17:56.096758  945004 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0224 13:17:56.096841  945004 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0224 13:17:56.103941  945004 start.go:563] Will wait 60s for crictl version
	I0224 13:17:56.104011  945004 ssh_runner.go:195] Run: which crictl
	I0224 13:17:56.109077  945004 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0224 13:17:56.158314  945004 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0224 13:17:56.158424  945004 ssh_runner.go:195] Run: crio --version
	I0224 13:17:56.191215  945004 ssh_runner.go:195] Run: crio --version
	I0224 13:17:56.224896  945004 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0224 13:17:56.226406  945004 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetIP
	I0224 13:17:56.229684  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:56.230159  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:a9:f6", ip: ""} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:17:45 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:old-k8s-version-233759 Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:17:56.230192  945004 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:17:56.230460  945004 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0224 13:17:56.236322  945004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 13:17:56.251231  945004 kubeadm.go:883] updating cluster {Name:old-k8s-version-233759 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-233759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0224 13:17:56.251394  945004 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0224 13:17:56.251467  945004 ssh_runner.go:195] Run: sudo crictl images --output json
	I0224 13:17:56.291390  945004 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0224 13:17:56.291479  945004 ssh_runner.go:195] Run: which lz4
	I0224 13:17:56.295985  945004 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0224 13:17:56.301374  945004 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0224 13:17:56.301432  945004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0224 13:17:58.186983  945004 crio.go:462] duration metric: took 1.891039595s to copy over tarball
	I0224 13:17:58.187077  945004 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0224 13:18:01.187997  945004 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.000875101s)
	I0224 13:18:01.188045  945004 crio.go:469] duration metric: took 3.001022456s to extract the tarball
	I0224 13:18:01.188056  945004 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0224 13:18:01.235493  945004 ssh_runner.go:195] Run: sudo crictl images --output json
	I0224 13:18:01.291224  945004 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0224 13:18:01.291257  945004 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0224 13:18:01.291346  945004 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0224 13:18:01.291384  945004 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0224 13:18:01.291387  945004 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0224 13:18:01.291408  945004 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0224 13:18:01.291335  945004 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 13:18:01.291457  945004 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0224 13:18:01.291468  945004 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0224 13:18:01.291374  945004 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0224 13:18:01.292827  945004 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0224 13:18:01.292827  945004 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0224 13:18:01.292859  945004 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0224 13:18:01.292900  945004 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0224 13:18:01.292829  945004 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0224 13:18:01.292829  945004 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0224 13:18:01.292833  945004 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 13:18:01.292831  945004 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0224 13:18:01.467016  945004 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0224 13:18:01.474491  945004 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0224 13:18:01.489091  945004 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0224 13:18:01.511769  945004 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0224 13:18:01.529903  945004 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0224 13:18:01.531106  945004 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0224 13:18:01.541731  945004 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0224 13:18:01.578785  945004 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0224 13:18:01.578841  945004 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0224 13:18:01.578893  945004 ssh_runner.go:195] Run: which crictl
	I0224 13:18:01.579302  945004 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0224 13:18:01.579363  945004 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0224 13:18:01.579410  945004 ssh_runner.go:195] Run: which crictl
	I0224 13:18:01.673882  945004 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0224 13:18:01.673942  945004 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0224 13:18:01.673994  945004 ssh_runner.go:195] Run: which crictl
	I0224 13:18:01.694367  945004 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0224 13:18:01.694417  945004 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0224 13:18:01.694461  945004 ssh_runner.go:195] Run: which crictl
	I0224 13:18:01.708313  945004 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0224 13:18:01.708363  945004 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0224 13:18:01.708393  945004 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0224 13:18:01.708420  945004 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0224 13:18:01.708431  945004 ssh_runner.go:195] Run: which crictl
	I0224 13:18:01.708447  945004 ssh_runner.go:195] Run: which crictl
	I0224 13:18:01.708500  945004 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0224 13:18:01.708315  945004 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0224 13:18:01.708538  945004 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0224 13:18:01.708571  945004 ssh_runner.go:195] Run: which crictl
	I0224 13:18:01.708579  945004 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0224 13:18:01.708622  945004 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0224 13:18:01.708650  945004 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0224 13:18:01.732902  945004 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0224 13:18:01.837346  945004 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0224 13:18:01.851889  945004 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0224 13:18:01.852016  945004 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0224 13:18:01.852036  945004 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0224 13:18:01.852049  945004 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0224 13:18:01.857538  945004 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0224 13:18:01.890246  945004 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0224 13:18:02.025217  945004 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0224 13:18:02.025272  945004 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0224 13:18:02.039056  945004 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0224 13:18:02.039137  945004 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0224 13:18:02.039187  945004 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0224 13:18:02.039137  945004 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0224 13:18:02.054781  945004 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0224 13:18:02.165346  945004 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0224 13:18:02.198952  945004 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0224 13:18:02.222149  945004 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0224 13:18:02.222231  945004 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0224 13:18:02.222278  945004 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0224 13:18:02.222306  945004 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0224 13:18:02.222359  945004 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0224 13:18:02.281708  945004 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0224 13:18:02.282743  945004 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0224 13:18:02.517668  945004 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 13:18:02.668962  945004 cache_images.go:92] duration metric: took 1.377683538s to LoadCachedImages
	W0224 13:18:02.669086  945004 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0224 13:18:02.669107  945004 kubeadm.go:934] updating node { 192.168.50.62 8443 v1.20.0 crio true true} ...
	I0224 13:18:02.669251  945004 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-233759 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-233759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
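
A minimal sketch of cross-checking the kubelet flags and cluster config above against what the node's kubelet actually got, assuming the profile name old-k8s-version-233759 from this log and the systemd drop-in path minikube writes a few lines further down; the VM has to still be running:

	# Effective kubelet unit plus drop-ins, as systemd sees them
	minikube -p old-k8s-version-233759 ssh "sudo systemctl cat kubelet"
	# The flag file and kubelet config that minikube copied onto the node
	minikube -p old-k8s-version-233759 ssh "sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf /var/lib/kubelet/config.yaml"
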
	I0224 13:18:02.669362  945004 ssh_runner.go:195] Run: crio config
	I0224 13:18:02.732105  945004 cni.go:84] Creating CNI manager for ""
	I0224 13:18:02.732133  945004 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0224 13:18:02.732146  945004 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0224 13:18:02.732165  945004 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.62 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-233759 NodeName:old-k8s-version-233759 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0224 13:18:02.732296  945004 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-233759"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.62
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.62"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
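
The rendered kubeadm config above is copied to the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp line below) before init runs. A hedged sketch, under the same assumed profile name, for pulling back the file kubeadm was actually fed:

	# Path taken from the Run: lines below; requires the VM to still be up
	minikube -p old-k8s-version-233759 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml"
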
	
	I0224 13:18:02.732354  945004 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0224 13:18:02.743754  945004 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 13:18:02.743834  945004 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0224 13:18:02.755843  945004 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0224 13:18:02.778032  945004 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0224 13:18:02.800580  945004 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0224 13:18:02.822614  945004 ssh_runner.go:195] Run: grep 192.168.50.62	control-plane.minikube.internal$ /etc/hosts
	I0224 13:18:02.828657  945004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 13:18:02.846365  945004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 13:18:02.999461  945004 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0224 13:18:03.018785  945004 certs.go:68] Setting up /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759 for IP: 192.168.50.62
	I0224 13:18:03.018825  945004 certs.go:194] generating shared ca certs ...
	I0224 13:18:03.018848  945004 certs.go:226] acquiring lock for ca certs: {Name:mk38777c6b180f63d1816020cff79a01106ddf33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:18:03.019045  945004 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20451-887294/.minikube/ca.key
	I0224 13:18:03.019103  945004 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20451-887294/.minikube/proxy-client-ca.key
	I0224 13:18:03.019118  945004 certs.go:256] generating profile certs ...
	I0224 13:18:03.019197  945004 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/client.key
	I0224 13:18:03.019229  945004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/client.crt with IP's: []
	I0224 13:18:03.201466  945004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/client.crt ...
	I0224 13:18:03.201500  945004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/client.crt: {Name:mk001e2a8dc5966322582bc822c20c2332b4560d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:18:03.201701  945004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/client.key ...
	I0224 13:18:03.201716  945004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/client.key: {Name:mk09415b12227826438fe31f8751fa04619e4abf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:18:03.201830  945004 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/apiserver.key.e3d4d6a2
	I0224 13:18:03.201849  945004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/apiserver.crt.e3d4d6a2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.62]
	I0224 13:18:03.417858  945004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/apiserver.crt.e3d4d6a2 ...
	I0224 13:18:03.417900  945004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/apiserver.crt.e3d4d6a2: {Name:mk0b92438d6251322d9046df3750461465a983d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:18:03.418091  945004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/apiserver.key.e3d4d6a2 ...
	I0224 13:18:03.418111  945004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/apiserver.key.e3d4d6a2: {Name:mkce48f0fc2c9f8c55dc36f706bf97ce392f169c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:18:03.418220  945004 certs.go:381] copying /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/apiserver.crt.e3d4d6a2 -> /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/apiserver.crt
	I0224 13:18:03.418329  945004 certs.go:385] copying /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/apiserver.key.e3d4d6a2 -> /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/apiserver.key
	I0224 13:18:03.418414  945004 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/proxy-client.key
	I0224 13:18:03.418434  945004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/proxy-client.crt with IP's: []
	I0224 13:18:03.562529  945004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/proxy-client.crt ...
	I0224 13:18:03.562582  945004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/proxy-client.crt: {Name:mka58cd88f5015ce0a03c89f2df585658f84c842 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:18:03.562819  945004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/proxy-client.key ...
	I0224 13:18:03.562847  945004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/proxy-client.key: {Name:mk3e03c21de40f3db90eb877880fc296422fa9bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:18:03.563140  945004 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/894564.pem (1338 bytes)
	W0224 13:18:03.563210  945004 certs.go:480] ignoring /home/jenkins/minikube-integration/20451-887294/.minikube/certs/894564_empty.pem, impossibly tiny 0 bytes
	I0224 13:18:03.563226  945004 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca-key.pem (1679 bytes)
	I0224 13:18:03.563259  945004 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem (1082 bytes)
	I0224 13:18:03.563293  945004 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem (1123 bytes)
	I0224 13:18:03.563324  945004 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/key.pem (1679 bytes)
	I0224 13:18:03.563385  945004 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/ssl/certs/8945642.pem (1708 bytes)
	I0224 13:18:03.564234  945004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0224 13:18:03.596881  945004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0224 13:18:03.626821  945004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0224 13:18:03.658392  945004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0224 13:18:03.692597  945004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0224 13:18:03.723788  945004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0224 13:18:03.756287  945004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0224 13:18:03.789845  945004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0224 13:18:03.824019  945004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/ssl/certs/8945642.pem --> /usr/share/ca-certificates/8945642.pem (1708 bytes)
	I0224 13:18:03.875546  945004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0224 13:18:03.912517  945004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/certs/894564.pem --> /usr/share/ca-certificates/894564.pem (1338 bytes)
	I0224 13:18:03.948178  945004 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0224 13:18:03.978582  945004 ssh_runner.go:195] Run: openssl version
	I0224 13:18:03.985656  945004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8945642.pem && ln -fs /usr/share/ca-certificates/8945642.pem /etc/ssl/certs/8945642.pem"
	I0224 13:18:04.001442  945004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8945642.pem
	I0224 13:18:04.008540  945004 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 24 12:09 /usr/share/ca-certificates/8945642.pem
	I0224 13:18:04.008618  945004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8945642.pem
	I0224 13:18:04.016827  945004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8945642.pem /etc/ssl/certs/3ec20f2e.0"
	I0224 13:18:04.033745  945004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0224 13:18:04.053653  945004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0224 13:18:04.061633  945004 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 24 12:01 /usr/share/ca-certificates/minikubeCA.pem
	I0224 13:18:04.061712  945004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0224 13:18:04.075375  945004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0224 13:18:04.099930  945004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/894564.pem && ln -fs /usr/share/ca-certificates/894564.pem /etc/ssl/certs/894564.pem"
	I0224 13:18:04.119493  945004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/894564.pem
	I0224 13:18:04.126899  945004 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 24 12:09 /usr/share/ca-certificates/894564.pem
	I0224 13:18:04.126975  945004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/894564.pem
	I0224 13:18:04.138064  945004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/894564.pem /etc/ssl/certs/51391683.0"
	I0224 13:18:04.154183  945004 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0224 13:18:04.161423  945004 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0224 13:18:04.161504  945004 kubeadm.go:392] StartCluster: {Name:old-k8s-version-233759 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-233759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 13:18:04.161620  945004 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0224 13:18:04.161676  945004 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0224 13:18:04.214397  945004 cri.go:89] found id: ""
	I0224 13:18:04.214485  945004 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0224 13:18:04.227439  945004 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0224 13:18:04.240814  945004 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 13:18:04.254486  945004 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 13:18:04.254517  945004 kubeadm.go:157] found existing configuration files:
	
	I0224 13:18:04.254583  945004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0224 13:18:04.266397  945004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0224 13:18:04.266529  945004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0224 13:18:04.280404  945004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0224 13:18:04.293362  945004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0224 13:18:04.293453  945004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0224 13:18:04.309134  945004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0224 13:18:04.323939  945004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0224 13:18:04.324036  945004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0224 13:18:04.339200  945004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0224 13:18:04.353873  945004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0224 13:18:04.353947  945004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0224 13:18:04.369058  945004 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0224 13:18:04.526964  945004 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0224 13:18:04.527076  945004 kubeadm.go:310] [preflight] Running pre-flight checks
	I0224 13:18:04.705688  945004 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0224 13:18:04.705830  945004 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0224 13:18:04.705935  945004 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0224 13:18:04.964087  945004 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0224 13:18:04.967415  945004 out.go:235]   - Generating certificates and keys ...
	I0224 13:18:04.967551  945004 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0224 13:18:04.967642  945004 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0224 13:18:05.162198  945004 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0224 13:18:05.288582  945004 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0224 13:18:05.717764  945004 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0224 13:18:05.854692  945004 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0224 13:18:06.119230  945004 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0224 13:18:06.124513  945004 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-233759] and IPs [192.168.50.62 127.0.0.1 ::1]
	I0224 13:18:06.229995  945004 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0224 13:18:06.230483  945004 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-233759] and IPs [192.168.50.62 127.0.0.1 ::1]
	I0224 13:18:06.354283  945004 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0224 13:18:06.492995  945004 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0224 13:18:06.656842  945004 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0224 13:18:06.657157  945004 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0224 13:18:06.948443  945004 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0224 13:18:07.278090  945004 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0224 13:18:07.697152  945004 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0224 13:18:07.818115  945004 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0224 13:18:07.847194  945004 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 13:18:07.849008  945004 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 13:18:07.849141  945004 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0224 13:18:08.006408  945004 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0224 13:18:08.008474  945004 out.go:235]   - Booting up control plane ...
	I0224 13:18:08.008648  945004 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0224 13:18:08.022644  945004 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0224 13:18:08.023842  945004 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0224 13:18:08.024631  945004 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0224 13:18:08.031849  945004 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0224 13:18:48.028953  945004 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0224 13:18:48.030040  945004 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:18:48.030273  945004 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:18:53.031170  945004 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:18:53.031407  945004 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:19:03.032512  945004 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:19:03.032791  945004 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:19:23.033943  945004 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:19:23.034223  945004 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:20:03.034532  945004 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:20:03.034921  945004 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:20:03.034938  945004 kubeadm.go:310] 
	I0224 13:20:03.034991  945004 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0224 13:20:03.035047  945004 kubeadm.go:310] 		timed out waiting for the condition
	I0224 13:20:03.035063  945004 kubeadm.go:310] 
	I0224 13:20:03.035114  945004 kubeadm.go:310] 	This error is likely caused by:
	I0224 13:20:03.035155  945004 kubeadm.go:310] 		- The kubelet is not running
	I0224 13:20:03.035292  945004 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0224 13:20:03.035300  945004 kubeadm.go:310] 
	I0224 13:20:03.035386  945004 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0224 13:20:03.035412  945004 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0224 13:20:03.035440  945004 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0224 13:20:03.035444  945004 kubeadm.go:310] 
	I0224 13:20:03.035531  945004 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0224 13:20:03.035641  945004 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0224 13:20:03.035650  945004 kubeadm.go:310] 
	I0224 13:20:03.035745  945004 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0224 13:20:03.035812  945004 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0224 13:20:03.035869  945004 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0224 13:20:03.035923  945004 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0224 13:20:03.035928  945004 kubeadm.go:310] 
	I0224 13:20:03.037286  945004 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 13:20:03.037423  945004 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0224 13:20:03.037573  945004 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0224 13:20:03.037760  945004 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-233759] and IPs [192.168.50.62 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-233759] and IPs [192.168.50.62 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-233759] and IPs [192.168.50.62 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-233759] and IPs [192.168.50.62 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
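
The troubleshooting commands kubeadm suggests above have to be run inside the VM; one rough way to do that from the host, assuming the same profile name and the CRI-O socket path shown in the output, is:

	# Is the kubelet running, and what is it logging?
	minikube -p old-k8s-version-233759 ssh "sudo systemctl status kubelet --no-pager"
	minikube -p old-k8s-version-233759 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 100"
	# Did any control-plane container start and exit under CRI-O?
	minikube -p old-k8s-version-233759 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
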
	
	I0224 13:20:03.037809  945004 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0224 13:20:05.550944  945004 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.51308936s)
	I0224 13:20:05.551032  945004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 13:20:05.573897  945004 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 13:20:05.589789  945004 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 13:20:05.589816  945004 kubeadm.go:157] found existing configuration files:
	
	I0224 13:20:05.589877  945004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0224 13:20:05.603814  945004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0224 13:20:05.603874  945004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0224 13:20:05.616321  945004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0224 13:20:05.628882  945004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0224 13:20:05.628960  945004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0224 13:20:05.640030  945004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0224 13:20:05.651228  945004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0224 13:20:05.651330  945004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0224 13:20:05.663377  945004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0224 13:20:05.674241  945004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0224 13:20:05.674314  945004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0224 13:20:05.685918  945004 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0224 13:20:05.924941  945004 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 13:22:02.153073  945004 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0224 13:22:02.153232  945004 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0224 13:22:02.154991  945004 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0224 13:22:02.155058  945004 kubeadm.go:310] [preflight] Running pre-flight checks
	I0224 13:22:02.155147  945004 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0224 13:22:02.155279  945004 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0224 13:22:02.155372  945004 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0224 13:22:02.155495  945004 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0224 13:22:02.327389  945004 out.go:235]   - Generating certificates and keys ...
	I0224 13:22:02.327536  945004 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0224 13:22:02.327589  945004 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0224 13:22:02.327686  945004 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0224 13:22:02.327747  945004 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0224 13:22:02.327810  945004 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0224 13:22:02.327857  945004 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0224 13:22:02.327933  945004 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0224 13:22:02.328000  945004 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0224 13:22:02.328064  945004 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0224 13:22:02.328194  945004 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0224 13:22:02.328269  945004 kubeadm.go:310] [certs] Using the existing "sa" key
	I0224 13:22:02.328356  945004 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0224 13:22:02.328452  945004 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0224 13:22:02.328534  945004 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0224 13:22:02.328633  945004 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0224 13:22:02.328721  945004 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0224 13:22:02.328843  945004 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 13:22:02.328913  945004 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 13:22:02.328946  945004 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0224 13:22:02.329013  945004 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0224 13:22:02.330393  945004 out.go:235]   - Booting up control plane ...
	I0224 13:22:02.330531  945004 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0224 13:22:02.330663  945004 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0224 13:22:02.330772  945004 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0224 13:22:02.330888  945004 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0224 13:22:02.331113  945004 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0224 13:22:02.331193  945004 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0224 13:22:02.331291  945004 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:22:02.331515  945004 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:22:02.331581  945004 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:22:02.331828  945004 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:22:02.331943  945004 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:22:02.332181  945004 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:22:02.332283  945004 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:22:02.332537  945004 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:22:02.332638  945004 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:22:02.332845  945004 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:22:02.332856  945004 kubeadm.go:310] 
	I0224 13:22:02.332910  945004 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0224 13:22:02.332967  945004 kubeadm.go:310] 		timed out waiting for the condition
	I0224 13:22:02.332976  945004 kubeadm.go:310] 
	I0224 13:22:02.333024  945004 kubeadm.go:310] 	This error is likely caused by:
	I0224 13:22:02.333075  945004 kubeadm.go:310] 		- The kubelet is not running
	I0224 13:22:02.333170  945004 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0224 13:22:02.333180  945004 kubeadm.go:310] 
	I0224 13:22:02.333318  945004 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0224 13:22:02.333367  945004 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0224 13:22:02.333445  945004 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0224 13:22:02.333465  945004 kubeadm.go:310] 
	I0224 13:22:02.333603  945004 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0224 13:22:02.333707  945004 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0224 13:22:02.333718  945004 kubeadm.go:310] 
	I0224 13:22:02.333862  945004 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0224 13:22:02.333954  945004 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0224 13:22:02.334074  945004 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0224 13:22:02.334194  945004 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0224 13:22:02.334230  945004 kubeadm.go:310] 
	I0224 13:22:02.334286  945004 kubeadm.go:394] duration metric: took 3m58.172788032s to StartCluster
	I0224 13:22:02.334343  945004 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:22:02.334421  945004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:22:02.388338  945004 cri.go:89] found id: ""
	I0224 13:22:02.388375  945004 logs.go:282] 0 containers: []
	W0224 13:22:02.388388  945004 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:22:02.388398  945004 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:22:02.388481  945004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:22:02.436717  945004 cri.go:89] found id: ""
	I0224 13:22:02.436766  945004 logs.go:282] 0 containers: []
	W0224 13:22:02.436779  945004 logs.go:284] No container was found matching "etcd"
	I0224 13:22:02.436787  945004 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:22:02.436862  945004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:22:02.478061  945004 cri.go:89] found id: ""
	I0224 13:22:02.478097  945004 logs.go:282] 0 containers: []
	W0224 13:22:02.478110  945004 logs.go:284] No container was found matching "coredns"
	I0224 13:22:02.478119  945004 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:22:02.478190  945004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:22:02.518871  945004 cri.go:89] found id: ""
	I0224 13:22:02.518907  945004 logs.go:282] 0 containers: []
	W0224 13:22:02.518919  945004 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:22:02.518926  945004 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:22:02.518997  945004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:22:02.566684  945004 cri.go:89] found id: ""
	I0224 13:22:02.566717  945004 logs.go:282] 0 containers: []
	W0224 13:22:02.566730  945004 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:22:02.566738  945004 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:22:02.566807  945004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:22:02.611361  945004 cri.go:89] found id: ""
	I0224 13:22:02.611401  945004 logs.go:282] 0 containers: []
	W0224 13:22:02.611411  945004 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:22:02.611418  945004 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:22:02.611486  945004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:22:02.666829  945004 cri.go:89] found id: ""
	I0224 13:22:02.666864  945004 logs.go:282] 0 containers: []
	W0224 13:22:02.666878  945004 logs.go:284] No container was found matching "kindnet"
	I0224 13:22:02.666892  945004 logs.go:123] Gathering logs for kubelet ...
	I0224 13:22:02.666909  945004 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:22:02.727119  945004 logs.go:123] Gathering logs for dmesg ...
	I0224 13:22:02.727164  945004 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:22:02.745040  945004 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:22:02.745083  945004 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:22:02.892216  945004 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:22:02.892249  945004 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:22:02.892266  945004 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:22:03.016379  945004 logs.go:123] Gathering logs for container status ...
	I0224 13:22:03.016427  945004 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0224 13:22:03.073254  945004 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0224 13:22:03.073351  945004 out.go:270] * 
	* 
	W0224 13:22:03.073470  945004 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0224 13:22:03.073521  945004 out.go:270] * 
	* 
	W0224 13:22:03.074812  945004 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0224 13:22:03.078706  945004 out.go:201] 
	W0224 13:22:03.080145  945004 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0224 13:22:03.080215  945004 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0224 13:22:03.080244  945004 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0224 13:22:03.081936  945004 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-233759 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-233759 -n old-k8s-version-233759
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-233759 -n old-k8s-version-233759: exit status 6 (247.613285ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0224 13:22:03.382163  952395 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-233759" does not appear in /home/jenkins/minikube-integration/20451-887294/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-233759" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (287.71s)
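Note: the kubeadm output captured above already names the next diagnostic steps. A minimal sketch of running them against this profile (assumptions: shell access to the node via `minikube ssh`, and the profile name taken from this run; the commands themselves are the ones the log suggests):

	# Open a shell on the failing node (profile name from this run).
	out/minikube-linux-amd64 ssh -p old-k8s-version-233759

	# Inside the VM: inspect the kubelet and the CRI-O containers, per the kubeadm hints above.
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# Back on the host: the minikube suggestion in the log is to retry with an explicit cgroup driver.
	out/minikube-linux-amd64 start -p old-k8s-version-233759 --extra-config=kubelet.cgroup-driver=systemd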

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-233759 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-233759 create -f testdata/busybox.yaml: exit status 1 (49.902501ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-233759" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-233759 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-233759 -n old-k8s-version-233759
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-233759 -n old-k8s-version-233759: exit status 6 (250.773242ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0224 13:22:03.684521  952433 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-233759" does not appear in /home/jenkins/minikube-integration/20451-887294/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-233759" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-233759 -n old-k8s-version-233759
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-233759 -n old-k8s-version-233759: exit status 6 (268.46127ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0224 13:22:03.952894  952463 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-233759" does not appear in /home/jenkins/minikube-integration/20451-887294/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-233759" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.57s)
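Note: the status output in this section points at a stale kubeconfig entry rather than a crashed host. A sketch of the repair the warning itself suggests (profile name taken from this run); since the apiserver on this node never came up, the context fix alone would not make the kubectl calls succeed:

	# List contexts known to the kubeconfig the test uses.
	kubectl config get-contexts

	# The status warning suggests refreshing the stale context for this profile.
	out/minikube-linux-amd64 update-context -p old-k8s-version-233759

	# Verify; this still fails while the apiserver on the node is down.
	kubectl --context old-k8s-version-233759 get nodes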

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (108.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-233759 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0224 13:22:07.546377  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/custom-flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:22:26.007810  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/calico-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:22:28.028681  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/custom-flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:22:30.948910  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/enable-default-cni-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:22:30.955370  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/enable-default-cni-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:22:30.966817  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/enable-default-cni-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:22:30.988425  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/enable-default-cni-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:22:31.029937  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/enable-default-cni-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:22:31.111435  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/enable-default-cni-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:22:31.272964  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/enable-default-cni-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:22:31.594393  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/enable-default-cni-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:22:32.236402  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/enable-default-cni-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:22:33.518319  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/enable-default-cni-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:22:36.079767  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/enable-default-cni-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:22:41.201897  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/enable-default-cni-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:22:43.833111  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:22:43.839668  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:22:43.851100  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:22:43.872670  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:22:43.914278  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:22:43.995897  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:22:44.157641  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:22:44.479374  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:22:45.121641  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:22:46.403532  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:22:48.965133  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:22:51.443917  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/enable-default-cni-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:22:54.087160  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:23:04.329291  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:23:07.584078  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kindnet-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:23:08.990213  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/custom-flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:23:11.925875  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/enable-default-cni-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:23:24.811001  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:23:28.218450  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/bridge-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:23:28.224887  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/bridge-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:23:28.236420  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/bridge-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:23:28.257990  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/bridge-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:23:28.299529  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/bridge-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:23:28.381776  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/bridge-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:23:28.543406  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/bridge-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:23:28.865219  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/bridge-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:23:29.507332  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/bridge-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:23:30.788688  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/bridge-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:23:33.350546  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/bridge-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:23:38.472470  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/bridge-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:23:47.929892  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/calico-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:23:48.714400  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/bridge-799329/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-233759 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m47.779307195s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-233759 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-233759 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-233759 describe deploy/metrics-server -n kube-system: exit status 1 (47.058459ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-233759" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-233759 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-233759 -n old-k8s-version-233759
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-233759 -n old-k8s-version-233759: exit status 6 (244.774491ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0224 13:23:52.023472  953150 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-233759" does not appear in /home/jenkins/minikube-integration/20451-887294/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-233759" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (108.07s)
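Note: the addon failure above bottoms out in the same refused connection to localhost:8443. The error box's own suggestion is to capture logs, after which the enable command from this test could be retried once the apiserver answers (sketch only; both commands are taken from the output above):

	# Collect logs for the profile, as the error box suggests.
	out/minikube-linux-amd64 -p old-k8s-version-233759 logs --file=logs.txt

	# Retry the addon enable used by this test once the control plane is reachable.
	out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-233759 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain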

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (510.77s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-233759 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0224 13:23:54.809282  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/auto-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:24:05.772545  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:24:09.196406  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/bridge-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:24:12.769889  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:24:22.514112  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/auto-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:24:30.912455  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/custom-flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:24:50.158192  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/bridge-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:25:14.808958  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/enable-default-cni-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:25:23.723679  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kindnet-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:25:27.694257  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:25:51.425980  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kindnet-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:26:04.068668  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/calico-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:26:12.080063  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/bridge-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:26:31.771515  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/calico-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:26:46.849639  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:26:47.049342  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/custom-flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:27:14.754164  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/custom-flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-233759 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m29.096266169s)

                                                
                                                
-- stdout --
	* [old-k8s-version-233759] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20451
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20451-887294/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-887294/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-233759" primary control-plane node in "old-k8s-version-233759" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-233759" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0224 13:23:54.760227  953268 out.go:345] Setting OutFile to fd 1 ...
	I0224 13:23:54.760373  953268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 13:23:54.760385  953268 out.go:358] Setting ErrFile to fd 2...
	I0224 13:23:54.760392  953268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 13:23:54.760890  953268 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-887294/.minikube/bin
	I0224 13:23:54.761714  953268 out.go:352] Setting JSON to false
	I0224 13:23:54.762842  953268 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":11176,"bootTime":1740392259,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 13:23:54.762948  953268 start.go:139] virtualization: kvm guest
	I0224 13:23:54.766055  953268 out.go:177] * [old-k8s-version-233759] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0224 13:23:54.767605  953268 out.go:177]   - MINIKUBE_LOCATION=20451
	I0224 13:23:54.767603  953268 notify.go:220] Checking for updates...
	I0224 13:23:54.770534  953268 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 13:23:54.771806  953268 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20451-887294/kubeconfig
	I0224 13:23:54.773239  953268 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-887294/.minikube
	I0224 13:23:54.774562  953268 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0224 13:23:54.775842  953268 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 13:23:54.777807  953268 config.go:182] Loaded profile config "old-k8s-version-233759": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0224 13:23:54.778525  953268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:23:54.778638  953268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:23:54.795028  953268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33941
	I0224 13:23:54.795484  953268 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:23:54.796100  953268 main.go:141] libmachine: Using API Version  1
	I0224 13:23:54.796130  953268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:23:54.796537  953268 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:23:54.796770  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .DriverName
	I0224 13:23:54.798968  953268 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0224 13:23:54.800475  953268 driver.go:394] Setting default libvirt URI to qemu:///system
	I0224 13:23:54.800825  953268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:23:54.800875  953268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:23:54.817522  953268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44921
	I0224 13:23:54.818007  953268 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:23:54.818530  953268 main.go:141] libmachine: Using API Version  1
	I0224 13:23:54.818554  953268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:23:54.818949  953268 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:23:54.819197  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .DriverName
	I0224 13:23:54.859682  953268 out.go:177] * Using the kvm2 driver based on existing profile
	I0224 13:23:54.860983  953268 start.go:297] selected driver: kvm2
	I0224 13:23:54.861000  953268 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-233759 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-233759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 13:23:54.861111  953268 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 13:23:54.862020  953268 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 13:23:54.862134  953268 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20451-887294/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0224 13:23:54.879187  953268 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0224 13:23:54.879652  953268 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0224 13:23:54.879694  953268 cni.go:84] Creating CNI manager for ""
	I0224 13:23:54.879754  953268 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0224 13:23:54.879810  953268 start.go:340] cluster config:
	{Name:old-k8s-version-233759 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-233759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 13:23:54.879933  953268 iso.go:125] acquiring lock: {Name:mk57408cca66a96a13d93cda43cdfac6e61aef3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 13:23:54.882048  953268 out.go:177] * Starting "old-k8s-version-233759" primary control-plane node in "old-k8s-version-233759" cluster
	I0224 13:23:54.883587  953268 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0224 13:23:54.883678  953268 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0224 13:23:54.883694  953268 cache.go:56] Caching tarball of preloaded images
	I0224 13:23:54.883813  953268 preload.go:172] Found /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0224 13:23:54.883827  953268 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0224 13:23:54.883937  953268 profile.go:143] Saving config to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/config.json ...
	I0224 13:23:54.884149  953268 start.go:360] acquireMachinesLock for old-k8s-version-233759: {Name:mk023761b01bb629a1acd40bc8104cc517b0e15b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0224 13:23:54.884207  953268 start.go:364] duration metric: took 36.461µs to acquireMachinesLock for "old-k8s-version-233759"
	I0224 13:23:54.884224  953268 start.go:96] Skipping create...Using existing machine configuration
	I0224 13:23:54.884230  953268 fix.go:54] fixHost starting: 
	I0224 13:23:54.884507  953268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:23:54.884538  953268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:23:54.902508  953268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35315
	I0224 13:23:54.903102  953268 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:23:54.903648  953268 main.go:141] libmachine: Using API Version  1
	I0224 13:23:54.903673  953268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:23:54.904092  953268 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:23:54.904251  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .DriverName
	I0224 13:23:54.904408  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetState
	I0224 13:23:54.906152  953268 fix.go:112] recreateIfNeeded on old-k8s-version-233759: state=Stopped err=<nil>
	I0224 13:23:54.906196  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .DriverName
	W0224 13:23:54.906380  953268 fix.go:138] unexpected machine state, will restart: <nil>
	I0224 13:23:54.909753  953268 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-233759" ...
	I0224 13:23:54.911492  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .Start
	I0224 13:23:54.911780  953268 main.go:141] libmachine: (old-k8s-version-233759) starting domain...
	I0224 13:23:54.911809  953268 main.go:141] libmachine: (old-k8s-version-233759) ensuring networks are active...
	I0224 13:23:54.912733  953268 main.go:141] libmachine: (old-k8s-version-233759) Ensuring network default is active
	I0224 13:23:54.913141  953268 main.go:141] libmachine: (old-k8s-version-233759) Ensuring network mk-old-k8s-version-233759 is active
	I0224 13:23:54.913684  953268 main.go:141] libmachine: (old-k8s-version-233759) getting domain XML...
	I0224 13:23:54.914573  953268 main.go:141] libmachine: (old-k8s-version-233759) creating domain...
	I0224 13:23:56.225628  953268 main.go:141] libmachine: (old-k8s-version-233759) waiting for IP...
	I0224 13:23:56.226574  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:23:56.227068  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | unable to find current IP address of domain old-k8s-version-233759 in network mk-old-k8s-version-233759
	I0224 13:23:56.227204  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:23:56.227087  953303 retry.go:31] will retry after 234.747619ms: waiting for domain to come up
	I0224 13:23:56.463650  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:23:56.464311  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | unable to find current IP address of domain old-k8s-version-233759 in network mk-old-k8s-version-233759
	I0224 13:23:56.464343  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:23:56.464289  953303 retry.go:31] will retry after 364.4586ms: waiting for domain to come up
	I0224 13:23:56.830923  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:23:56.831695  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | unable to find current IP address of domain old-k8s-version-233759 in network mk-old-k8s-version-233759
	I0224 13:23:56.831730  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:23:56.831638  953303 retry.go:31] will retry after 407.641266ms: waiting for domain to come up
	I0224 13:23:57.241366  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:23:57.242052  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | unable to find current IP address of domain old-k8s-version-233759 in network mk-old-k8s-version-233759
	I0224 13:23:57.242087  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:23:57.242014  953303 retry.go:31] will retry after 589.491575ms: waiting for domain to come up
	I0224 13:23:57.833002  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:23:57.833694  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | unable to find current IP address of domain old-k8s-version-233759 in network mk-old-k8s-version-233759
	I0224 13:23:57.833726  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:23:57.833646  953303 retry.go:31] will retry after 637.895961ms: waiting for domain to come up
	I0224 13:23:58.473509  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:23:58.474176  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | unable to find current IP address of domain old-k8s-version-233759 in network mk-old-k8s-version-233759
	I0224 13:23:58.474203  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:23:58.474130  953303 retry.go:31] will retry after 768.23373ms: waiting for domain to come up
	I0224 13:23:59.243729  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:23:59.244256  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | unable to find current IP address of domain old-k8s-version-233759 in network mk-old-k8s-version-233759
	I0224 13:23:59.244287  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:23:59.244198  953303 retry.go:31] will retry after 981.339732ms: waiting for domain to come up
	I0224 13:24:00.227725  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:00.228204  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | unable to find current IP address of domain old-k8s-version-233759 in network mk-old-k8s-version-233759
	I0224 13:24:00.228256  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:24:00.228194  953303 retry.go:31] will retry after 1.245469929s: waiting for domain to come up
	I0224 13:24:01.475639  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:01.476267  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | unable to find current IP address of domain old-k8s-version-233759 in network mk-old-k8s-version-233759
	I0224 13:24:01.476301  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:24:01.476212  953303 retry.go:31] will retry after 1.650347258s: waiting for domain to come up
	I0224 13:24:03.129089  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:03.129722  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | unable to find current IP address of domain old-k8s-version-233759 in network mk-old-k8s-version-233759
	I0224 13:24:03.129753  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:24:03.129650  953303 retry.go:31] will retry after 1.879757018s: waiting for domain to come up
	I0224 13:24:05.011208  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:05.011868  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | unable to find current IP address of domain old-k8s-version-233759 in network mk-old-k8s-version-233759
	I0224 13:24:05.011905  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:24:05.011821  953303 retry.go:31] will retry after 2.344638481s: waiting for domain to come up
	I0224 13:24:07.359468  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:07.360064  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | unable to find current IP address of domain old-k8s-version-233759 in network mk-old-k8s-version-233759
	I0224 13:24:07.360093  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:24:07.360032  953303 retry.go:31] will retry after 3.238963089s: waiting for domain to come up
	I0224 13:24:10.601112  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:10.601796  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | unable to find current IP address of domain old-k8s-version-233759 in network mk-old-k8s-version-233759
	I0224 13:24:10.601831  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | I0224 13:24:10.601725  953303 retry.go:31] will retry after 3.163349128s: waiting for domain to come up
	I0224 13:24:13.766531  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:13.767158  953268 main.go:141] libmachine: (old-k8s-version-233759) found domain IP: 192.168.50.62
	I0224 13:24:13.767188  953268 main.go:141] libmachine: (old-k8s-version-233759) reserving static IP address...
	I0224 13:24:13.767222  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has current primary IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:13.767673  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "old-k8s-version-233759", mac: "52:54:00:cd:a9:f6", ip: "192.168.50.62"} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:old-k8s-version-233759 Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:24:13.767711  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | skip adding static IP to network mk-old-k8s-version-233759 - found existing host DHCP lease matching {name: "old-k8s-version-233759", mac: "52:54:00:cd:a9:f6", ip: "192.168.50.62"}
	I0224 13:24:13.767726  953268 main.go:141] libmachine: (old-k8s-version-233759) reserved static IP address 192.168.50.62 for domain old-k8s-version-233759
	I0224 13:24:13.767740  953268 main.go:141] libmachine: (old-k8s-version-233759) waiting for SSH...
	I0224 13:24:13.767756  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | Getting to WaitForSSH function...
	I0224 13:24:13.770158  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:13.770564  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:a9:f6", ip: ""} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:old-k8s-version-233759 Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:24:13.770590  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:13.770729  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | Using SSH client type: external
	I0224 13:24:13.770761  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | Using SSH private key: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/old-k8s-version-233759/id_rsa (-rw-------)
	I0224 13:24:13.770816  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20451-887294/.minikube/machines/old-k8s-version-233759/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0224 13:24:13.770833  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | About to run SSH command:
	I0224 13:24:13.770850  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | exit 0
	I0224 13:24:13.893994  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | SSH cmd err, output: <nil>: 
	I0224 13:24:13.894363  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetConfigRaw
	I0224 13:24:13.895254  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetIP
	I0224 13:24:13.898847  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:13.899297  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:a9:f6", ip: ""} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:old-k8s-version-233759 Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:24:13.899334  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:13.899654  953268 profile.go:143] Saving config to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/config.json ...
	I0224 13:24:13.899947  953268 machine.go:93] provisionDockerMachine start ...
	I0224 13:24:13.899968  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .DriverName
	I0224 13:24:13.900220  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHHostname
	I0224 13:24:13.902864  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:13.903284  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:a9:f6", ip: ""} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:old-k8s-version-233759 Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:24:13.903373  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:13.903572  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHPort
	I0224 13:24:13.903819  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHKeyPath
	I0224 13:24:13.903984  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHKeyPath
	I0224 13:24:13.904283  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHUsername
	I0224 13:24:13.904523  953268 main.go:141] libmachine: Using SSH client type: native
	I0224 13:24:13.904799  953268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0224 13:24:13.904812  953268 main.go:141] libmachine: About to run SSH command:
	hostname
	I0224 13:24:14.006701  953268 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0224 13:24:14.006740  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetMachineName
	I0224 13:24:14.007028  953268 buildroot.go:166] provisioning hostname "old-k8s-version-233759"
	I0224 13:24:14.007061  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetMachineName
	I0224 13:24:14.007234  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHHostname
	I0224 13:24:14.010732  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:14.011225  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:a9:f6", ip: ""} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:old-k8s-version-233759 Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:24:14.011277  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:14.011482  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHPort
	I0224 13:24:14.011720  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHKeyPath
	I0224 13:24:14.011922  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHKeyPath
	I0224 13:24:14.012075  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHUsername
	I0224 13:24:14.012252  953268 main.go:141] libmachine: Using SSH client type: native
	I0224 13:24:14.012523  953268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0224 13:24:14.012544  953268 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-233759 && echo "old-k8s-version-233759" | sudo tee /etc/hostname
	I0224 13:24:14.129642  953268 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-233759
	
	I0224 13:24:14.129695  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHHostname
	I0224 13:24:14.133257  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:14.133674  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:a9:f6", ip: ""} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:old-k8s-version-233759 Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:24:14.133727  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:14.133864  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHPort
	I0224 13:24:14.134101  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHKeyPath
	I0224 13:24:14.134319  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHKeyPath
	I0224 13:24:14.134501  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHUsername
	I0224 13:24:14.134681  953268 main.go:141] libmachine: Using SSH client type: native
	I0224 13:24:14.134955  953268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0224 13:24:14.134984  953268 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-233759' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-233759/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-233759' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 13:24:14.249362  953268 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 13:24:14.249401  953268 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20451-887294/.minikube CaCertPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20451-887294/.minikube}
	I0224 13:24:14.249450  953268 buildroot.go:174] setting up certificates
	I0224 13:24:14.249476  953268 provision.go:84] configureAuth start
	I0224 13:24:14.249493  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetMachineName
	I0224 13:24:14.249830  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetIP
	I0224 13:24:14.252548  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:14.252870  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:a9:f6", ip: ""} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:old-k8s-version-233759 Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:24:14.252919  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:14.253054  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHHostname
	I0224 13:24:14.255728  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:14.256098  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:a9:f6", ip: ""} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:old-k8s-version-233759 Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:24:14.256145  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:14.256292  953268 provision.go:143] copyHostCerts
	I0224 13:24:14.256386  953268 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-887294/.minikube/ca.pem, removing ...
	I0224 13:24:14.256401  953268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-887294/.minikube/ca.pem
	I0224 13:24:14.256478  953268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20451-887294/.minikube/ca.pem (1082 bytes)
	I0224 13:24:14.256609  953268 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-887294/.minikube/cert.pem, removing ...
	I0224 13:24:14.256621  953268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-887294/.minikube/cert.pem
	I0224 13:24:14.256662  953268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20451-887294/.minikube/cert.pem (1123 bytes)
	I0224 13:24:14.256740  953268 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-887294/.minikube/key.pem, removing ...
	I0224 13:24:14.256758  953268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-887294/.minikube/key.pem
	I0224 13:24:14.256793  953268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20451-887294/.minikube/key.pem (1679 bytes)
	I0224 13:24:14.256883  953268 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-233759 san=[127.0.0.1 192.168.50.62 localhost minikube old-k8s-version-233759]
	I0224 13:24:14.428174  953268 provision.go:177] copyRemoteCerts
	I0224 13:24:14.428251  953268 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 13:24:14.428305  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHHostname
	I0224 13:24:14.431639  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:14.432095  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:a9:f6", ip: ""} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:old-k8s-version-233759 Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:24:14.432131  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:14.432357  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHPort
	I0224 13:24:14.432603  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHKeyPath
	I0224 13:24:14.432807  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHUsername
	I0224 13:24:14.433000  953268 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/old-k8s-version-233759/id_rsa Username:docker}
	I0224 13:24:14.513088  953268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0224 13:24:14.542634  953268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0224 13:24:14.573920  953268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0224 13:24:14.602102  953268 provision.go:87] duration metric: took 352.60641ms to configureAuth
	I0224 13:24:14.602147  953268 buildroot.go:189] setting minikube options for container-runtime
	I0224 13:24:14.602370  953268 config.go:182] Loaded profile config "old-k8s-version-233759": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0224 13:24:14.602489  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHHostname
	I0224 13:24:14.606271  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:14.606856  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:a9:f6", ip: ""} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:old-k8s-version-233759 Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:24:14.606884  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:14.607199  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHPort
	I0224 13:24:14.607460  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHKeyPath
	I0224 13:24:14.607702  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHKeyPath
	I0224 13:24:14.608088  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHUsername
	I0224 13:24:14.608431  953268 main.go:141] libmachine: Using SSH client type: native
	I0224 13:24:14.608732  953268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0224 13:24:14.608764  953268 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0224 13:24:14.860716  953268 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0224 13:24:14.860747  953268 machine.go:96] duration metric: took 960.785637ms to provisionDockerMachine
	I0224 13:24:14.860764  953268 start.go:293] postStartSetup for "old-k8s-version-233759" (driver="kvm2")
	I0224 13:24:14.860778  953268 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 13:24:14.860804  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .DriverName
	I0224 13:24:14.861220  953268 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 13:24:14.861265  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHHostname
	I0224 13:24:14.864446  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:14.864759  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:a9:f6", ip: ""} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:old-k8s-version-233759 Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:24:14.864783  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:14.864969  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHPort
	I0224 13:24:14.865198  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHKeyPath
	I0224 13:24:14.865398  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHUsername
	I0224 13:24:14.865545  953268 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/old-k8s-version-233759/id_rsa Username:docker}
	I0224 13:24:14.950341  953268 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 13:24:14.955578  953268 info.go:137] Remote host: Buildroot 2023.02.9
	I0224 13:24:14.955620  953268 filesync.go:126] Scanning /home/jenkins/minikube-integration/20451-887294/.minikube/addons for local assets ...
	I0224 13:24:14.955693  953268 filesync.go:126] Scanning /home/jenkins/minikube-integration/20451-887294/.minikube/files for local assets ...
	I0224 13:24:14.955792  953268 filesync.go:149] local asset: /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/ssl/certs/8945642.pem -> 8945642.pem in /etc/ssl/certs
	I0224 13:24:14.955934  953268 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0224 13:24:14.967555  953268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/ssl/certs/8945642.pem --> /etc/ssl/certs/8945642.pem (1708 bytes)
	I0224 13:24:14.996767  953268 start.go:296] duration metric: took 135.986681ms for postStartSetup
	I0224 13:24:14.996820  953268 fix.go:56] duration metric: took 20.112588628s for fixHost
	I0224 13:24:14.996851  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHHostname
	I0224 13:24:15.000012  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:15.000621  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:a9:f6", ip: ""} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:old-k8s-version-233759 Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:24:15.000657  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:15.000869  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHPort
	I0224 13:24:15.001287  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHKeyPath
	I0224 13:24:15.001562  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHKeyPath
	I0224 13:24:15.001783  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHUsername
	I0224 13:24:15.002010  953268 main.go:141] libmachine: Using SSH client type: native
	I0224 13:24:15.002283  953268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0224 13:24:15.002300  953268 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0224 13:24:15.111060  953268 main.go:141] libmachine: SSH cmd err, output: <nil>: 1740403455.066209312
	
	I0224 13:24:15.111092  953268 fix.go:216] guest clock: 1740403455.066209312
	I0224 13:24:15.111103  953268 fix.go:229] Guest: 2025-02-24 13:24:15.066209312 +0000 UTC Remote: 2025-02-24 13:24:14.996826341 +0000 UTC m=+20.281565561 (delta=69.382971ms)
	I0224 13:24:15.111161  953268 fix.go:200] guest clock delta is within tolerance: 69.382971ms
	I0224 13:24:15.111167  953268 start.go:83] releasing machines lock for "old-k8s-version-233759", held for 20.226949103s
	I0224 13:24:15.111192  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .DriverName
	I0224 13:24:15.111506  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetIP
	I0224 13:24:15.114475  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:15.114867  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:a9:f6", ip: ""} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:old-k8s-version-233759 Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:24:15.114903  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:15.115048  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .DriverName
	I0224 13:24:15.115656  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .DriverName
	I0224 13:24:15.115828  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .DriverName
	I0224 13:24:15.115984  953268 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0224 13:24:15.116053  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHHostname
	I0224 13:24:15.116064  953268 ssh_runner.go:195] Run: cat /version.json
	I0224 13:24:15.116085  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHHostname
	I0224 13:24:15.119414  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:15.119544  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:15.119799  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:a9:f6", ip: ""} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:old-k8s-version-233759 Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:24:15.119829  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:15.119950  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:a9:f6", ip: ""} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:old-k8s-version-233759 Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:24:15.119985  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:15.119988  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHPort
	I0224 13:24:15.120179  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHPort
	I0224 13:24:15.120264  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHKeyPath
	I0224 13:24:15.120375  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHKeyPath
	I0224 13:24:15.120460  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHUsername
	I0224 13:24:15.120534  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetSSHUsername
	I0224 13:24:15.120641  953268 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/old-k8s-version-233759/id_rsa Username:docker}
	I0224 13:24:15.120695  953268 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/old-k8s-version-233759/id_rsa Username:docker}
	I0224 13:24:15.199641  953268 ssh_runner.go:195] Run: systemctl --version
	I0224 13:24:15.228241  953268 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0224 13:24:15.383521  953268 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0224 13:24:15.390904  953268 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0224 13:24:15.390975  953268 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0224 13:24:15.409414  953268 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0224 13:24:15.409458  953268 start.go:495] detecting cgroup driver to use...
	I0224 13:24:15.409547  953268 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0224 13:24:15.428226  953268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0224 13:24:15.445215  953268 docker.go:217] disabling cri-docker service (if available) ...
	I0224 13:24:15.445288  953268 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0224 13:24:15.461254  953268 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0224 13:24:15.479752  953268 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0224 13:24:15.611241  953268 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0224 13:24:15.796285  953268 docker.go:233] disabling docker service ...
	I0224 13:24:15.796354  953268 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0224 13:24:15.814956  953268 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0224 13:24:15.829715  953268 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0224 13:24:15.963587  953268 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0224 13:24:16.098182  953268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0224 13:24:16.116095  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 13:24:16.138789  953268 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0224 13:24:16.138864  953268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:24:16.152896  953268 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0224 13:24:16.152972  953268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:24:16.167367  953268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:24:16.180522  953268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
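The three sed commands above edit /etc/crio/crio.conf.d/02-crio.conf: set pause_image to registry.k8s.io/pause:3.2, set cgroup_manager to cgroupfs, delete any existing conmon_cgroup line, and re-add conmon_cgroup = "pod" right after cgroup_manager. A small Go sketch of the same rewrite applied to the config text (a sketch of the edit logic only, assuming the file contents are already in memory):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// rewriteCrioConf applies the same edits the logged sed commands make:
// pin pause_image, set cgroup_manager, and place conmon_cgroup = "pod"
// immediately after cgroup_manager.
func rewriteCrioConf(conf, pauseImage, cgroupDriver string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	// drop any existing conmon_cgroup line (and its newline, if present)
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n?`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q\nconmon_cgroup = \"pod\"", cgroupDriver))
	return conf
}

func main() {
	in := strings.Join([]string{
		`pause_image = "registry.k8s.io/pause:3.9"`,
		`conmon_cgroup = "system.slice"`,
		`cgroup_manager = "systemd"`,
	}, "\n") + "\n"
	fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.2", "cgroupfs"))
}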
	I0224 13:24:16.193752  953268 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 13:24:16.207292  953268 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 13:24:16.219544  953268 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0224 13:24:16.219645  953268 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0224 13:24:16.236816  953268 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
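The sysctl probe above fails with status 255 because /proc/sys/net/bridge does not exist until the br_netfilter module is loaded; the follow-up steps load the module and enable IPv4 forwarding. A hedged Go sketch of those two actions (must run as root; the function name is mine):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter loads br_netfilter and turns on net.ipv4.ip_forward,
// matching the modprobe and "echo 1 > /proc/sys/net/ipv4/ip_forward" steps.
func ensureBridgeNetfilter() error {
	if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
		return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}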
	I0224 13:24:16.249327  953268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 13:24:16.379843  953268 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0224 13:24:16.482556  953268 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0224 13:24:16.482653  953268 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0224 13:24:16.488699  953268 start.go:563] Will wait 60s for crictl version
	I0224 13:24:16.488775  953268 ssh_runner.go:195] Run: which crictl
	I0224 13:24:16.493234  953268 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0224 13:24:16.536042  953268 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0224 13:24:16.536127  953268 ssh_runner.go:195] Run: crio --version
	I0224 13:24:16.567549  953268 ssh_runner.go:195] Run: crio --version
	I0224 13:24:16.601282  953268 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0224 13:24:16.602750  953268 main.go:141] libmachine: (old-k8s-version-233759) Calling .GetIP
	I0224 13:24:16.606078  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:16.606451  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:a9:f6", ip: ""} in network mk-old-k8s-version-233759: {Iface:virbr2 ExpiryTime:2025-02-24 14:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:a9:f6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:old-k8s-version-233759 Clientid:01:52:54:00:cd:a9:f6}
	I0224 13:24:16.606487  953268 main.go:141] libmachine: (old-k8s-version-233759) DBG | domain old-k8s-version-233759 has defined IP address 192.168.50.62 and MAC address 52:54:00:cd:a9:f6 in network mk-old-k8s-version-233759
	I0224 13:24:16.606835  953268 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0224 13:24:16.612350  953268 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 13:24:16.628047  953268 kubeadm.go:883] updating cluster {Name:old-k8s-version-233759 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-233759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0224 13:24:16.628242  953268 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0224 13:24:16.628295  953268 ssh_runner.go:195] Run: sudo crictl images --output json
	I0224 13:24:16.683800  953268 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0224 13:24:16.683885  953268 ssh_runner.go:195] Run: which lz4
	I0224 13:24:16.689059  953268 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0224 13:24:16.694586  953268 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0224 13:24:16.694630  953268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0224 13:24:18.587835  953268 crio.go:462] duration metric: took 1.89881054s to copy over tarball
	I0224 13:24:18.587927  953268 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0224 13:24:21.894416  953268 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.306422152s)
	I0224 13:24:21.894455  953268 crio.go:469] duration metric: took 3.306581974s to extract the tarball
	I0224 13:24:21.894466  953268 ssh_runner.go:146] rm: /preloaded.tar.lz4
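The sequence above is the preload fast path: stat /preloaded.tar.lz4 on the VM, scp the ~473 MB cri-o preload tarball when it is absent, untar it into /var with extended attributes preserved, then remove the tarball. A sketch of the check-and-extract half, assuming the tarball has already been copied onto the machine where this runs:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks the preload tarball into /var with xattrs kept,
// the same tar invocation as in the log, and deletes it afterwards.
func extractPreload(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload tarball missing: %w", err)
	}
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract: %v: %s", err, out)
	}
	return os.Remove(tarball)
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}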
	I0224 13:24:21.943585  953268 ssh_runner.go:195] Run: sudo crictl images --output json
	I0224 13:24:21.987100  953268 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0224 13:24:21.987137  953268 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0224 13:24:21.987251  953268 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 13:24:21.987290  953268 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0224 13:24:21.987291  953268 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0224 13:24:21.987362  953268 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0224 13:24:21.987374  953268 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0224 13:24:21.987286  953268 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0224 13:24:21.987922  953268 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0224 13:24:21.987954  953268 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0224 13:24:21.989508  953268 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0224 13:24:21.989520  953268 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0224 13:24:21.989655  953268 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0224 13:24:21.989681  953268 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0224 13:24:21.989903  953268 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0224 13:24:21.989942  953268 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0224 13:24:21.990034  953268 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0224 13:24:21.991935  953268 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 13:24:22.139176  953268 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0224 13:24:22.152984  953268 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0224 13:24:22.168346  953268 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0224 13:24:22.194223  953268 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0224 13:24:22.194298  953268 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0224 13:24:22.194359  953268 ssh_runner.go:195] Run: which crictl
	I0224 13:24:22.208857  953268 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0224 13:24:22.232512  953268 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0224 13:24:22.232563  953268 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0224 13:24:22.232626  953268 ssh_runner.go:195] Run: which crictl
	I0224 13:24:22.254687  953268 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0224 13:24:22.254775  953268 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0224 13:24:22.254824  953268 ssh_runner.go:195] Run: which crictl
	I0224 13:24:22.254829  953268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0224 13:24:22.272621  953268 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0224 13:24:22.272684  953268 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0224 13:24:22.272736  953268 ssh_runner.go:195] Run: which crictl
	I0224 13:24:22.272736  953268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0224 13:24:22.324894  953268 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0224 13:24:22.337237  953268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0224 13:24:22.337330  953268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0224 13:24:22.337397  953268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0224 13:24:22.337397  953268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0224 13:24:22.374352  953268 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0224 13:24:22.375965  953268 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0224 13:24:22.430848  953268 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0224 13:24:22.430909  953268 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0224 13:24:22.430971  953268 ssh_runner.go:195] Run: which crictl
	I0224 13:24:22.505543  953268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0224 13:24:22.505593  953268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0224 13:24:22.505656  953268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0224 13:24:22.505712  953268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0224 13:24:22.550685  953268 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0224 13:24:22.550751  953268 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0224 13:24:22.550809  953268 ssh_runner.go:195] Run: which crictl
	I0224 13:24:22.555255  953268 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0224 13:24:22.555304  953268 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0224 13:24:22.555334  953268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0224 13:24:22.555340  953268 ssh_runner.go:195] Run: which crictl
	I0224 13:24:22.619870  953268 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0224 13:24:22.619870  953268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0224 13:24:22.653333  953268 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0224 13:24:22.653432  953268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0224 13:24:22.653443  953268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0224 13:24:22.685791  953268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0224 13:24:22.685838  953268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0224 13:24:22.698786  953268 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0224 13:24:22.756541  953268 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0224 13:24:22.774249  953268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0224 13:24:22.789263  953268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0224 13:24:22.789282  953268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0224 13:24:22.844429  953268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0224 13:24:22.871707  953268 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0224 13:24:22.871804  953268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0224 13:24:22.919289  953268 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0224 13:24:22.923453  953268 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0224 13:24:23.174638  953268 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 13:24:23.334953  953268 cache_images.go:92] duration metric: took 1.347785498s to LoadCachedImages
	W0224 13:24:23.335083  953268 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
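The LoadCachedImages pass above inspects each required v1.20.0 image with podman image inspect, removes any copy whose ID does not match via crictl rmi, and then tries to load the image from the local cache directory. It fails here because the cache files (e.g. coredns_1.7.0) do not exist on the Jenkins host, so the start continues without them and the images get pulled later as needed. A condensed sketch of that per-image decision, with function and parameter names of my own (not minikube's):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// decideImageAction: keep the image when the runtime already has it at the
// expected ID, otherwise load it from the local cache file, and fall back
// to pulling when that file is missing (the case in this run).
func decideImageAction(haveID, wantID, cacheFile string) string {
	if haveID != "" && haveID == wantID {
		return "already present in runtime"
	}
	if _, err := os.Stat(cacheFile); err != nil {
		return "cache miss, image will be pulled later: " + filepath.Base(cacheFile)
	}
	return "load from cache: " + cacheFile
}

func main() {
	fmt.Println(decideImageAction(
		"", // podman image inspect found nothing
		"bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16",
		"/home/jenkins/minikube-integration/20451-887294/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0"))
}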
	I0224 13:24:23.335108  953268 kubeadm.go:934] updating node { 192.168.50.62 8443 v1.20.0 crio true true} ...
	I0224 13:24:23.335260  953268 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-233759 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-233759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
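The kubelet drop-in shown above is rendered from the node's config. A minimal text/template sketch that produces the same kind of ExecStart line; the template text is paraphrased from the unit above and is not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// kubeletFlags holds the values substituted into the ExecStart line of the
// 10-kubeadm.conf drop-in shown in the log.
type kubeletFlags struct {
	BinDir, RuntimeEndpoint, Hostname, NodeIP string
}

const dropIn = `[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint={{.RuntimeEndpoint}} --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, kubeletFlags{
		BinDir:          "/var/lib/minikube/binaries/v1.20.0",
		RuntimeEndpoint: "unix:///var/run/crio/crio.sock",
		Hostname:        "old-k8s-version-233759",
		NodeIP:          "192.168.50.62",
	})
}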
	I0224 13:24:23.335354  953268 ssh_runner.go:195] Run: crio config
	I0224 13:24:23.392488  953268 cni.go:84] Creating CNI manager for ""
	I0224 13:24:23.392516  953268 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0224 13:24:23.392532  953268 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0224 13:24:23.392552  953268 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.62 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-233759 NodeName:old-k8s-version-233759 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0224 13:24:23.392696  953268 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-233759"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.62
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.62"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0224 13:24:23.392789  953268 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0224 13:24:23.404233  953268 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 13:24:23.404310  953268 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0224 13:24:23.414575  953268 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0224 13:24:23.433701  953268 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0224 13:24:23.453357  953268 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0224 13:24:23.473450  953268 ssh_runner.go:195] Run: grep 192.168.50.62	control-plane.minikube.internal$ /etc/hosts
	I0224 13:24:23.478317  953268 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 13:24:23.492487  953268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 13:24:23.640388  953268 ssh_runner.go:195] Run: sudo systemctl start kubelet
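The { grep -v ...; echo ...; } > /tmp/h.$$; sudo cp ... pattern above (used earlier for host.minikube.internal and here for control-plane.minikube.internal) rewrites /etc/hosts idempotently: drop any stale line for the name, append the current mapping, and copy the result back over the file. A Go rendering of the same idea; the paths and names come from the log, the helper name is mine:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any existing line ending in "\t<name>" and
// appends "ip\tname", mirroring the grep -v / echo / cp sequence in the log.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.50.62", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}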
	I0224 13:24:23.661579  953268 certs.go:68] Setting up /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759 for IP: 192.168.50.62
	I0224 13:24:23.661612  953268 certs.go:194] generating shared ca certs ...
	I0224 13:24:23.661635  953268 certs.go:226] acquiring lock for ca certs: {Name:mk38777c6b180f63d1816020cff79a01106ddf33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:24:23.661865  953268 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20451-887294/.minikube/ca.key
	I0224 13:24:23.661927  953268 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20451-887294/.minikube/proxy-client-ca.key
	I0224 13:24:23.661937  953268 certs.go:256] generating profile certs ...
	I0224 13:24:23.662068  953268 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/client.key
	I0224 13:24:23.662146  953268 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/apiserver.key.e3d4d6a2
	I0224 13:24:23.662199  953268 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/proxy-client.key
	I0224 13:24:23.662401  953268 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/894564.pem (1338 bytes)
	W0224 13:24:23.662459  953268 certs.go:480] ignoring /home/jenkins/minikube-integration/20451-887294/.minikube/certs/894564_empty.pem, impossibly tiny 0 bytes
	I0224 13:24:23.662474  953268 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca-key.pem (1679 bytes)
	I0224 13:24:23.662505  953268 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem (1082 bytes)
	I0224 13:24:23.662531  953268 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem (1123 bytes)
	I0224 13:24:23.662551  953268 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/key.pem (1679 bytes)
	I0224 13:24:23.662606  953268 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/ssl/certs/8945642.pem (1708 bytes)
	I0224 13:24:23.663550  953268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0224 13:24:23.711473  953268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0224 13:24:23.747768  953268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0224 13:24:23.794395  953268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0224 13:24:23.845291  953268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0224 13:24:23.882390  953268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0224 13:24:23.919577  953268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0224 13:24:23.961330  953268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/old-k8s-version-233759/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0224 13:24:23.992077  953268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/certs/894564.pem --> /usr/share/ca-certificates/894564.pem (1338 bytes)
	I0224 13:24:24.021904  953268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/ssl/certs/8945642.pem --> /usr/share/ca-certificates/8945642.pem (1708 bytes)
	I0224 13:24:24.052527  953268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0224 13:24:24.081003  953268 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0224 13:24:24.100237  953268 ssh_runner.go:195] Run: openssl version
	I0224 13:24:24.107141  953268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/894564.pem && ln -fs /usr/share/ca-certificates/894564.pem /etc/ssl/certs/894564.pem"
	I0224 13:24:24.120370  953268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/894564.pem
	I0224 13:24:24.126132  953268 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 24 12:09 /usr/share/ca-certificates/894564.pem
	I0224 13:24:24.126235  953268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/894564.pem
	I0224 13:24:24.132936  953268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/894564.pem /etc/ssl/certs/51391683.0"
	I0224 13:24:24.146699  953268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8945642.pem && ln -fs /usr/share/ca-certificates/8945642.pem /etc/ssl/certs/8945642.pem"
	I0224 13:24:24.161789  953268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8945642.pem
	I0224 13:24:24.168080  953268 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 24 12:09 /usr/share/ca-certificates/8945642.pem
	I0224 13:24:24.168145  953268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8945642.pem
	I0224 13:24:24.175637  953268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8945642.pem /etc/ssl/certs/3ec20f2e.0"
	I0224 13:24:24.188541  953268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0224 13:24:24.201823  953268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0224 13:24:24.207453  953268 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 24 12:01 /usr/share/ca-certificates/minikubeCA.pem
	I0224 13:24:24.207536  953268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0224 13:24:24.214728  953268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
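The openssl/ln steps above install each CA under /usr/share/ca-certificates and then link it from /etc/ssl/certs under its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0 and b5213941.0 in this run), which is how OpenSSL finds trusted CAs. A sketch of creating one such hash link, shelling out to openssl x509 -hash the same way the log does (run as root; this is not minikube's code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash asks openssl for the certificate's subject hash and creates
// /etc/ssl/certs/<hash>.0 pointing at the installed PEM, like the
// "test -L ... || ln -fs ..." commands in the log.
func linkByHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}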
	I0224 13:24:24.228007  953268 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0224 13:24:24.233767  953268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0224 13:24:24.241384  953268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0224 13:24:24.248342  953268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0224 13:24:24.255213  953268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0224 13:24:24.262293  953268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0224 13:24:24.269080  953268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
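The openssl -checkend 86400 runs above verify that each control-plane certificate remains valid for at least 24 hours. The same check can be done without shelling out, using crypto/x509; this is an equivalent sketch, not the code minikube runs:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid for
// at least d, the check `openssl x509 -checkend` performs.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}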
	I0224 13:24:24.276870  953268 kubeadm.go:392] StartCluster: {Name:old-k8s-version-233759 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-233759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 13:24:24.277014  953268 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0224 13:24:24.277079  953268 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0224 13:24:24.320033  953268 cri.go:89] found id: ""
	I0224 13:24:24.320123  953268 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0224 13:24:24.331520  953268 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0224 13:24:24.331548  953268 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0224 13:24:24.331611  953268 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0224 13:24:24.343491  953268 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0224 13:24:24.344901  953268 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-233759" does not appear in /home/jenkins/minikube-integration/20451-887294/kubeconfig
	I0224 13:24:24.345815  953268 kubeconfig.go:62] /home/jenkins/minikube-integration/20451-887294/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-233759" cluster setting kubeconfig missing "old-k8s-version-233759" context setting]
	I0224 13:24:24.347040  953268 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/kubeconfig: {Name:mk0122b69f41cd40d5267f436266ccce22ce5ef2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:24:24.348699  953268 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0224 13:24:24.360598  953268 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.62
	I0224 13:24:24.360668  953268 kubeadm.go:1160] stopping kube-system containers ...
	I0224 13:24:24.360688  953268 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0224 13:24:24.360760  953268 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0224 13:24:24.406727  953268 cri.go:89] found id: ""
	I0224 13:24:24.406827  953268 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0224 13:24:24.426684  953268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 13:24:24.439365  953268 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 13:24:24.439388  953268 kubeadm.go:157] found existing configuration files:
	
	I0224 13:24:24.439437  953268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0224 13:24:24.450101  953268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0224 13:24:24.450187  953268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0224 13:24:24.461008  953268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0224 13:24:24.474554  953268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0224 13:24:24.474640  953268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0224 13:24:24.486026  953268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0224 13:24:24.497973  953268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0224 13:24:24.498044  953268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0224 13:24:24.508540  953268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0224 13:24:24.517995  953268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0224 13:24:24.518062  953268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0224 13:24:24.528628  953268 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0224 13:24:24.539381  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 13:24:24.673374  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 13:24:25.737040  953268 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.063615445s)
	I0224 13:24:25.737089  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0224 13:24:26.005194  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 13:24:26.129767  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0224 13:24:26.224619  953268 api_server.go:52] waiting for apiserver process to appear ...
	I0224 13:24:26.224711  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:26.725348  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:27.224896  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:27.725633  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:28.224837  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:28.725687  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:29.224819  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:29.724966  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:30.225850  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:30.724914  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:31.225344  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:31.725416  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:32.225219  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:32.725792  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:33.225230  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:33.725250  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:34.225751  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:34.725653  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:35.225751  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:35.725729  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:36.224838  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:36.724917  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:37.225424  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:37.725187  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:38.224931  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:38.725330  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:39.225637  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:39.725211  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:40.225037  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:40.725057  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:41.225245  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:41.725041  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:42.225678  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:42.724917  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:43.225082  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:43.724821  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:44.225785  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:44.724891  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:45.225549  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:45.725616  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:46.225609  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:46.725355  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:47.225238  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:47.725258  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:48.224880  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:48.725848  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:49.224878  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:49.725632  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:50.225608  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:50.724824  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:51.225738  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:51.725761  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:52.225206  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:52.725285  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:53.225815  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:53.725175  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:54.225745  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:54.725844  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:55.225695  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:55.725145  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:56.225704  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:56.724896  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:57.225599  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:57.725148  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:58.225114  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:58.724912  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:59.225248  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:24:59.724856  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:00.225490  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:00.725696  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:01.224848  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:01.724910  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:02.224837  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:02.725101  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:03.225718  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:03.725114  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:04.225408  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:04.724901  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:05.225275  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:05.724917  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:06.224989  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:06.725834  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:07.225023  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:07.724885  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:08.225458  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:08.725699  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:09.225448  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:09.725197  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:10.225403  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:10.725839  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:11.225115  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:11.725260  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:12.225195  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:12.725462  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:13.225261  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:13.724952  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:14.225026  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:14.725459  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:15.225375  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:15.724813  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:16.225179  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:16.725004  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:17.225643  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:17.724880  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:18.225527  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:18.725319  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:19.224973  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:19.725415  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:20.225679  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:20.725235  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:21.225497  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:21.725016  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:22.224904  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:22.725280  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:23.225837  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:23.724847  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:24.224968  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:24.725266  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:25.225737  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:25.725024  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
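The long run of `sudo pgrep -xnf kube-apiserver.*minikube.*` calls above is the apiserver wait loop: after `kubeadm init phase etcd local`, minikube polls roughly every 500ms for a kube-apiserver process, and after about a minute with no match (13:24:26 to 13:25:25 here) it moves on to gathering diagnostic logs, as seen below. A compact sketch of that poll-with-deadline pattern (the function name and timings are illustrative, taken from what the log shows):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process shows up or
// the deadline passes, echoing the loop visible in the log.
func waitForAPIServer(timeout, interval time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return true
		}
		time.Sleep(interval)
	}
	return false
}

func main() {
	fmt.Println("apiserver up:", waitForAPIServer(time.Minute, 500*time.Millisecond))
}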
	I0224 13:25:26.224974  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:25:26.225087  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:25:26.266337  953268 cri.go:89] found id: ""
	I0224 13:25:26.266377  953268 logs.go:282] 0 containers: []
	W0224 13:25:26.266388  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:25:26.266396  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:25:26.266469  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:25:26.304705  953268 cri.go:89] found id: ""
	I0224 13:25:26.304735  953268 logs.go:282] 0 containers: []
	W0224 13:25:26.304745  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:25:26.304753  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:25:26.304851  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:25:26.344267  953268 cri.go:89] found id: ""
	I0224 13:25:26.344300  953268 logs.go:282] 0 containers: []
	W0224 13:25:26.344310  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:25:26.344317  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:25:26.344390  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:25:26.381886  953268 cri.go:89] found id: ""
	I0224 13:25:26.381920  953268 logs.go:282] 0 containers: []
	W0224 13:25:26.381929  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:25:26.381935  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:25:26.382013  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:25:26.420007  953268 cri.go:89] found id: ""
	I0224 13:25:26.420046  953268 logs.go:282] 0 containers: []
	W0224 13:25:26.420057  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:25:26.420066  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:25:26.420158  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:25:26.457266  953268 cri.go:89] found id: ""
	I0224 13:25:26.457301  953268 logs.go:282] 0 containers: []
	W0224 13:25:26.457324  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:25:26.457333  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:25:26.457394  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:25:26.496598  953268 cri.go:89] found id: ""
	I0224 13:25:26.496625  953268 logs.go:282] 0 containers: []
	W0224 13:25:26.496636  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:25:26.496642  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:25:26.496695  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:25:26.534094  953268 cri.go:89] found id: ""
	I0224 13:25:26.534146  953268 logs.go:282] 0 containers: []
	W0224 13:25:26.534159  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:25:26.534175  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:25:26.534193  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:25:26.588965  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:25:26.589009  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:25:26.605491  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:25:26.605521  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:25:26.757119  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:25:26.757152  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:25:26.757189  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:25:26.830934  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:25:26.830981  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:25:29.382562  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:29.396257  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:25:29.396356  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:25:29.444983  953268 cri.go:89] found id: ""
	I0224 13:25:29.445027  953268 logs.go:282] 0 containers: []
	W0224 13:25:29.445036  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:25:29.445043  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:25:29.445108  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:25:29.489635  953268 cri.go:89] found id: ""
	I0224 13:25:29.489662  953268 logs.go:282] 0 containers: []
	W0224 13:25:29.489672  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:25:29.489678  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:25:29.489744  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:25:29.540129  953268 cri.go:89] found id: ""
	I0224 13:25:29.540166  953268 logs.go:282] 0 containers: []
	W0224 13:25:29.540179  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:25:29.540188  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:25:29.540276  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:25:29.579515  953268 cri.go:89] found id: ""
	I0224 13:25:29.579543  953268 logs.go:282] 0 containers: []
	W0224 13:25:29.579554  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:25:29.579562  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:25:29.579632  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:25:29.614190  953268 cri.go:89] found id: ""
	I0224 13:25:29.614234  953268 logs.go:282] 0 containers: []
	W0224 13:25:29.614248  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:25:29.614256  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:25:29.614334  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:25:29.652003  953268 cri.go:89] found id: ""
	I0224 13:25:29.652036  953268 logs.go:282] 0 containers: []
	W0224 13:25:29.652046  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:25:29.652053  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:25:29.652112  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:25:29.692380  953268 cri.go:89] found id: ""
	I0224 13:25:29.692417  953268 logs.go:282] 0 containers: []
	W0224 13:25:29.692429  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:25:29.692446  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:25:29.692515  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:25:29.737255  953268 cri.go:89] found id: ""
	I0224 13:25:29.737292  953268 logs.go:282] 0 containers: []
	W0224 13:25:29.737313  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:25:29.737328  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:25:29.737355  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:25:29.793395  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:25:29.793445  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:25:29.808844  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:25:29.808882  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:25:29.887691  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:25:29.887721  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:25:29.887739  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:25:29.965266  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:25:29.965336  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:25:32.513471  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:32.528132  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:25:32.528245  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:25:32.570651  953268 cri.go:89] found id: ""
	I0224 13:25:32.570688  953268 logs.go:282] 0 containers: []
	W0224 13:25:32.570701  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:25:32.570710  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:25:32.570786  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:25:32.606453  953268 cri.go:89] found id: ""
	I0224 13:25:32.606493  953268 logs.go:282] 0 containers: []
	W0224 13:25:32.606504  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:25:32.606510  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:25:32.606569  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:25:32.643982  953268 cri.go:89] found id: ""
	I0224 13:25:32.644018  953268 logs.go:282] 0 containers: []
	W0224 13:25:32.644029  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:25:32.644037  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:25:32.644105  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:25:32.686371  953268 cri.go:89] found id: ""
	I0224 13:25:32.686403  953268 logs.go:282] 0 containers: []
	W0224 13:25:32.686412  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:25:32.686418  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:25:32.686483  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:25:32.728429  953268 cri.go:89] found id: ""
	I0224 13:25:32.728463  953268 logs.go:282] 0 containers: []
	W0224 13:25:32.728472  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:25:32.728479  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:25:32.728534  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:25:32.767516  953268 cri.go:89] found id: ""
	I0224 13:25:32.767545  953268 logs.go:282] 0 containers: []
	W0224 13:25:32.767554  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:25:32.767561  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:25:32.767645  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:25:32.804837  953268 cri.go:89] found id: ""
	I0224 13:25:32.804869  953268 logs.go:282] 0 containers: []
	W0224 13:25:32.804880  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:25:32.804888  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:25:32.804952  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:25:32.847058  953268 cri.go:89] found id: ""
	I0224 13:25:32.847092  953268 logs.go:282] 0 containers: []
	W0224 13:25:32.847104  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:25:32.847115  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:25:32.847134  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:25:32.926730  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:25:32.926765  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:25:32.926783  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:25:33.017934  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:25:33.017983  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:25:33.067447  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:25:33.067481  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:25:33.117775  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:25:33.117823  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:25:35.634196  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:35.649454  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:25:35.649555  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:25:35.693180  953268 cri.go:89] found id: ""
	I0224 13:25:35.693212  953268 logs.go:282] 0 containers: []
	W0224 13:25:35.693220  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:25:35.693226  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:25:35.693282  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:25:35.733939  953268 cri.go:89] found id: ""
	I0224 13:25:35.733980  953268 logs.go:282] 0 containers: []
	W0224 13:25:35.733990  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:25:35.733997  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:25:35.734061  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:25:35.771612  953268 cri.go:89] found id: ""
	I0224 13:25:35.771647  953268 logs.go:282] 0 containers: []
	W0224 13:25:35.771657  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:25:35.771663  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:25:35.771719  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:25:35.811384  953268 cri.go:89] found id: ""
	I0224 13:25:35.811416  953268 logs.go:282] 0 containers: []
	W0224 13:25:35.811424  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:25:35.811431  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:25:35.811490  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:25:35.850173  953268 cri.go:89] found id: ""
	I0224 13:25:35.850206  953268 logs.go:282] 0 containers: []
	W0224 13:25:35.850218  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:25:35.850225  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:25:35.850295  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:25:35.888464  953268 cri.go:89] found id: ""
	I0224 13:25:35.888498  953268 logs.go:282] 0 containers: []
	W0224 13:25:35.888507  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:25:35.888514  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:25:35.888565  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:25:35.932724  953268 cri.go:89] found id: ""
	I0224 13:25:35.932759  953268 logs.go:282] 0 containers: []
	W0224 13:25:35.932772  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:25:35.932780  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:25:35.932843  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:25:35.976672  953268 cri.go:89] found id: ""
	I0224 13:25:35.976706  953268 logs.go:282] 0 containers: []
	W0224 13:25:35.976716  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:25:35.976728  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:25:35.976746  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:25:36.028803  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:25:36.028860  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:25:36.044487  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:25:36.044528  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:25:36.127198  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:25:36.127229  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:25:36.127248  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:25:36.206925  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:25:36.206971  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:25:38.758263  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:38.771639  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:25:38.771732  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:25:38.809080  953268 cri.go:89] found id: ""
	I0224 13:25:38.809118  953268 logs.go:282] 0 containers: []
	W0224 13:25:38.809131  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:25:38.809140  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:25:38.809233  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:25:38.847020  953268 cri.go:89] found id: ""
	I0224 13:25:38.847056  953268 logs.go:282] 0 containers: []
	W0224 13:25:38.847066  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:25:38.847072  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:25:38.847123  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:25:38.892468  953268 cri.go:89] found id: ""
	I0224 13:25:38.892497  953268 logs.go:282] 0 containers: []
	W0224 13:25:38.892506  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:25:38.892512  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:25:38.892566  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:25:38.929806  953268 cri.go:89] found id: ""
	I0224 13:25:38.929838  953268 logs.go:282] 0 containers: []
	W0224 13:25:38.929850  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:25:38.929858  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:25:38.929967  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:25:38.969732  953268 cri.go:89] found id: ""
	I0224 13:25:38.969763  953268 logs.go:282] 0 containers: []
	W0224 13:25:38.969773  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:25:38.969782  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:25:38.969853  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:25:39.010280  953268 cri.go:89] found id: ""
	I0224 13:25:39.010317  953268 logs.go:282] 0 containers: []
	W0224 13:25:39.010330  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:25:39.010339  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:25:39.010417  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:25:39.050387  953268 cri.go:89] found id: ""
	I0224 13:25:39.050427  953268 logs.go:282] 0 containers: []
	W0224 13:25:39.050439  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:25:39.050447  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:25:39.050516  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:25:39.087216  953268 cri.go:89] found id: ""
	I0224 13:25:39.087250  953268 logs.go:282] 0 containers: []
	W0224 13:25:39.087259  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:25:39.087269  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:25:39.087281  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:25:39.139271  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:25:39.139318  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:25:39.156553  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:25:39.156600  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:25:39.236105  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:25:39.236129  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:25:39.236141  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:25:39.313932  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:25:39.313981  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:25:41.857590  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:41.871348  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:25:41.871423  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:25:41.916651  953268 cri.go:89] found id: ""
	I0224 13:25:41.916683  953268 logs.go:282] 0 containers: []
	W0224 13:25:41.916696  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:25:41.916704  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:25:41.916776  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:25:41.958035  953268 cri.go:89] found id: ""
	I0224 13:25:41.958066  953268 logs.go:282] 0 containers: []
	W0224 13:25:41.958074  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:25:41.958081  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:25:41.958134  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:25:41.999687  953268 cri.go:89] found id: ""
	I0224 13:25:41.999717  953268 logs.go:282] 0 containers: []
	W0224 13:25:41.999726  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:25:41.999733  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:25:41.999793  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:25:42.039529  953268 cri.go:89] found id: ""
	I0224 13:25:42.039558  953268 logs.go:282] 0 containers: []
	W0224 13:25:42.039567  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:25:42.039591  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:25:42.039647  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:25:42.076916  953268 cri.go:89] found id: ""
	I0224 13:25:42.076953  953268 logs.go:282] 0 containers: []
	W0224 13:25:42.076963  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:25:42.076969  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:25:42.077024  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:25:42.116729  953268 cri.go:89] found id: ""
	I0224 13:25:42.116766  953268 logs.go:282] 0 containers: []
	W0224 13:25:42.116777  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:25:42.116785  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:25:42.116858  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:25:42.157462  953268 cri.go:89] found id: ""
	I0224 13:25:42.157496  953268 logs.go:282] 0 containers: []
	W0224 13:25:42.157507  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:25:42.157516  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:25:42.157582  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:25:42.195986  953268 cri.go:89] found id: ""
	I0224 13:25:42.196014  953268 logs.go:282] 0 containers: []
	W0224 13:25:42.196023  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:25:42.196034  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:25:42.196049  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:25:42.210504  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:25:42.210547  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:25:42.291939  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:25:42.291968  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:25:42.291982  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:25:42.370359  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:25:42.370411  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:25:42.412678  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:25:42.412713  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:25:44.967924  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:44.984493  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:25:44.984575  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:25:45.026275  953268 cri.go:89] found id: ""
	I0224 13:25:45.026302  953268 logs.go:282] 0 containers: []
	W0224 13:25:45.026311  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:25:45.026317  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:25:45.026384  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:25:45.067597  953268 cri.go:89] found id: ""
	I0224 13:25:45.067629  953268 logs.go:282] 0 containers: []
	W0224 13:25:45.067639  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:25:45.067644  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:25:45.067697  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:25:45.104671  953268 cri.go:89] found id: ""
	I0224 13:25:45.104717  953268 logs.go:282] 0 containers: []
	W0224 13:25:45.104730  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:25:45.104739  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:25:45.104815  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:25:45.143610  953268 cri.go:89] found id: ""
	I0224 13:25:45.143641  953268 logs.go:282] 0 containers: []
	W0224 13:25:45.143653  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:25:45.143660  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:25:45.143728  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:25:45.189130  953268 cri.go:89] found id: ""
	I0224 13:25:45.189169  953268 logs.go:282] 0 containers: []
	W0224 13:25:45.189179  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:25:45.189186  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:25:45.189249  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:25:45.231913  953268 cri.go:89] found id: ""
	I0224 13:25:45.231955  953268 logs.go:282] 0 containers: []
	W0224 13:25:45.231968  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:25:45.231977  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:25:45.232057  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:25:45.269108  953268 cri.go:89] found id: ""
	I0224 13:25:45.269141  953268 logs.go:282] 0 containers: []
	W0224 13:25:45.269150  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:25:45.269157  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:25:45.269245  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:25:45.309605  953268 cri.go:89] found id: ""
	I0224 13:25:45.309633  953268 logs.go:282] 0 containers: []
	W0224 13:25:45.309642  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:25:45.309653  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:25:45.309666  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:25:45.323788  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:25:45.323824  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:25:45.404755  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:25:45.404802  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:25:45.404828  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:25:45.485102  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:25:45.485145  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:25:45.526358  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:25:45.526399  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:25:48.081460  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:48.095750  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:25:48.095845  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:25:48.134161  953268 cri.go:89] found id: ""
	I0224 13:25:48.134194  953268 logs.go:282] 0 containers: []
	W0224 13:25:48.134203  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:25:48.134209  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:25:48.134275  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:25:48.175740  953268 cri.go:89] found id: ""
	I0224 13:25:48.175779  953268 logs.go:282] 0 containers: []
	W0224 13:25:48.175795  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:25:48.175801  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:25:48.175860  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:25:48.215503  953268 cri.go:89] found id: ""
	I0224 13:25:48.215536  953268 logs.go:282] 0 containers: []
	W0224 13:25:48.215546  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:25:48.215552  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:25:48.215604  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:25:48.252834  953268 cri.go:89] found id: ""
	I0224 13:25:48.252869  953268 logs.go:282] 0 containers: []
	W0224 13:25:48.252880  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:25:48.252889  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:25:48.252940  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:25:48.293266  953268 cri.go:89] found id: ""
	I0224 13:25:48.293335  953268 logs.go:282] 0 containers: []
	W0224 13:25:48.293348  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:25:48.293355  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:25:48.293416  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:25:48.335583  953268 cri.go:89] found id: ""
	I0224 13:25:48.335629  953268 logs.go:282] 0 containers: []
	W0224 13:25:48.335641  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:25:48.335649  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:25:48.335742  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:25:48.374531  953268 cri.go:89] found id: ""
	I0224 13:25:48.374564  953268 logs.go:282] 0 containers: []
	W0224 13:25:48.374573  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:25:48.374579  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:25:48.374633  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:25:48.413164  953268 cri.go:89] found id: ""
	I0224 13:25:48.413199  953268 logs.go:282] 0 containers: []
	W0224 13:25:48.413210  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:25:48.413222  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:25:48.413235  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:25:48.467538  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:25:48.467586  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:25:48.484015  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:25:48.484055  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:25:48.556497  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:25:48.556533  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:25:48.556556  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:25:48.641086  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:25:48.641133  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:25:51.185506  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:51.199350  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:25:51.199437  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:25:51.240036  953268 cri.go:89] found id: ""
	I0224 13:25:51.240073  953268 logs.go:282] 0 containers: []
	W0224 13:25:51.240085  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:25:51.240094  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:25:51.240157  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:25:51.282480  953268 cri.go:89] found id: ""
	I0224 13:25:51.282526  953268 logs.go:282] 0 containers: []
	W0224 13:25:51.282538  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:25:51.282547  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:25:51.282617  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:25:51.321382  953268 cri.go:89] found id: ""
	I0224 13:25:51.321409  953268 logs.go:282] 0 containers: []
	W0224 13:25:51.321419  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:25:51.321428  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:25:51.321497  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:25:51.363395  953268 cri.go:89] found id: ""
	I0224 13:25:51.363433  953268 logs.go:282] 0 containers: []
	W0224 13:25:51.363446  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:25:51.363455  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:25:51.363526  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:25:51.401556  953268 cri.go:89] found id: ""
	I0224 13:25:51.401589  953268 logs.go:282] 0 containers: []
	W0224 13:25:51.401598  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:25:51.401603  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:25:51.401660  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:25:51.443253  953268 cri.go:89] found id: ""
	I0224 13:25:51.443294  953268 logs.go:282] 0 containers: []
	W0224 13:25:51.443305  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:25:51.443313  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:25:51.443383  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:25:51.482468  953268 cri.go:89] found id: ""
	I0224 13:25:51.482495  953268 logs.go:282] 0 containers: []
	W0224 13:25:51.482505  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:25:51.482511  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:25:51.482563  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:25:51.520747  953268 cri.go:89] found id: ""
	I0224 13:25:51.520785  953268 logs.go:282] 0 containers: []
	W0224 13:25:51.520796  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:25:51.520808  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:25:51.520821  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:25:51.607629  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:25:51.607654  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:25:51.607668  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:25:51.687898  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:25:51.687948  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:25:51.758974  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:25:51.759014  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:25:51.816143  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:25:51.816190  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:25:54.333488  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:54.348234  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:25:54.348375  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:25:54.387976  953268 cri.go:89] found id: ""
	I0224 13:25:54.388015  953268 logs.go:282] 0 containers: []
	W0224 13:25:54.388029  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:25:54.388039  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:25:54.388116  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:25:54.432083  953268 cri.go:89] found id: ""
	I0224 13:25:54.432122  953268 logs.go:282] 0 containers: []
	W0224 13:25:54.432135  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:25:54.432144  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:25:54.432225  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:25:54.472588  953268 cri.go:89] found id: ""
	I0224 13:25:54.472623  953268 logs.go:282] 0 containers: []
	W0224 13:25:54.472633  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:25:54.472640  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:25:54.472707  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:25:54.510882  953268 cri.go:89] found id: ""
	I0224 13:25:54.510925  953268 logs.go:282] 0 containers: []
	W0224 13:25:54.510934  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:25:54.510940  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:25:54.510996  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:25:54.549109  953268 cri.go:89] found id: ""
	I0224 13:25:54.549139  953268 logs.go:282] 0 containers: []
	W0224 13:25:54.549149  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:25:54.549157  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:25:54.549242  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:25:54.586033  953268 cri.go:89] found id: ""
	I0224 13:25:54.586069  953268 logs.go:282] 0 containers: []
	W0224 13:25:54.586080  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:25:54.586089  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:25:54.586159  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:25:54.622888  953268 cri.go:89] found id: ""
	I0224 13:25:54.622916  953268 logs.go:282] 0 containers: []
	W0224 13:25:54.622929  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:25:54.622936  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:25:54.623006  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:25:54.668967  953268 cri.go:89] found id: ""
	I0224 13:25:54.668997  953268 logs.go:282] 0 containers: []
	W0224 13:25:54.669005  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:25:54.669015  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:25:54.669030  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:25:54.745517  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:25:54.745575  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:25:54.789401  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:25:54.789453  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:25:54.844026  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:25:54.844065  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:25:54.860452  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:25:54.860486  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:25:54.937122  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:25:57.437485  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:25:57.468398  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:25:57.468485  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:25:57.514872  953268 cri.go:89] found id: ""
	I0224 13:25:57.514911  953268 logs.go:282] 0 containers: []
	W0224 13:25:57.514924  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:25:57.514931  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:25:57.515055  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:25:57.552867  953268 cri.go:89] found id: ""
	I0224 13:25:57.552902  953268 logs.go:282] 0 containers: []
	W0224 13:25:57.552915  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:25:57.552924  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:25:57.552992  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:25:57.590386  953268 cri.go:89] found id: ""
	I0224 13:25:57.590419  953268 logs.go:282] 0 containers: []
	W0224 13:25:57.590432  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:25:57.590441  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:25:57.590520  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:25:57.629436  953268 cri.go:89] found id: ""
	I0224 13:25:57.629479  953268 logs.go:282] 0 containers: []
	W0224 13:25:57.629488  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:25:57.629493  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:25:57.629545  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:25:57.668683  953268 cri.go:89] found id: ""
	I0224 13:25:57.668723  953268 logs.go:282] 0 containers: []
	W0224 13:25:57.668735  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:25:57.668743  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:25:57.668805  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:25:57.707312  953268 cri.go:89] found id: ""
	I0224 13:25:57.707349  953268 logs.go:282] 0 containers: []
	W0224 13:25:57.707361  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:25:57.707369  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:25:57.707432  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:25:57.744959  953268 cri.go:89] found id: ""
	I0224 13:25:57.745003  953268 logs.go:282] 0 containers: []
	W0224 13:25:57.745017  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:25:57.745025  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:25:57.745092  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:25:57.782938  953268 cri.go:89] found id: ""
	I0224 13:25:57.782977  953268 logs.go:282] 0 containers: []
	W0224 13:25:57.782990  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:25:57.783007  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:25:57.783024  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:25:57.867969  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:25:57.868021  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:25:57.917222  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:25:57.917263  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:25:57.969760  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:25:57.969812  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:25:57.987372  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:25:57.987417  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:25:58.064393  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:26:00.564624  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:26:00.580024  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:26:00.580092  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:26:00.618325  953268 cri.go:89] found id: ""
	I0224 13:26:00.618358  953268 logs.go:282] 0 containers: []
	W0224 13:26:00.618370  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:26:00.618378  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:26:00.618445  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:26:00.662064  953268 cri.go:89] found id: ""
	I0224 13:26:00.662098  953268 logs.go:282] 0 containers: []
	W0224 13:26:00.662109  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:26:00.662117  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:26:00.662186  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:26:00.700717  953268 cri.go:89] found id: ""
	I0224 13:26:00.700746  953268 logs.go:282] 0 containers: []
	W0224 13:26:00.700755  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:26:00.700768  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:26:00.700821  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:26:00.744432  953268 cri.go:89] found id: ""
	I0224 13:26:00.744462  953268 logs.go:282] 0 containers: []
	W0224 13:26:00.744471  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:26:00.744477  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:26:00.744530  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:26:00.783003  953268 cri.go:89] found id: ""
	I0224 13:26:00.783041  953268 logs.go:282] 0 containers: []
	W0224 13:26:00.783053  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:26:00.783066  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:26:00.783141  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:26:00.819124  953268 cri.go:89] found id: ""
	I0224 13:26:00.819161  953268 logs.go:282] 0 containers: []
	W0224 13:26:00.819175  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:26:00.819185  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:26:00.819246  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:26:00.860653  953268 cri.go:89] found id: ""
	I0224 13:26:00.860691  953268 logs.go:282] 0 containers: []
	W0224 13:26:00.860703  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:26:00.860711  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:26:00.860799  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:26:00.899625  953268 cri.go:89] found id: ""
	I0224 13:26:00.899656  953268 logs.go:282] 0 containers: []
	W0224 13:26:00.899666  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:26:00.899681  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:26:00.899698  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:26:00.977829  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:26:00.977863  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:26:00.977878  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:26:01.058142  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:26:01.058189  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:26:01.105339  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:26:01.105382  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:26:01.160009  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:26:01.160050  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:26:03.678232  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:26:03.693074  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:26:03.693242  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:26:03.732170  953268 cri.go:89] found id: ""
	I0224 13:26:03.732215  953268 logs.go:282] 0 containers: []
	W0224 13:26:03.732227  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:26:03.732236  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:26:03.732300  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:26:03.769447  953268 cri.go:89] found id: ""
	I0224 13:26:03.769481  953268 logs.go:282] 0 containers: []
	W0224 13:26:03.769494  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:26:03.769501  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:26:03.769567  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:26:03.812193  953268 cri.go:89] found id: ""
	I0224 13:26:03.812228  953268 logs.go:282] 0 containers: []
	W0224 13:26:03.812237  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:26:03.812243  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:26:03.812299  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:26:03.851491  953268 cri.go:89] found id: ""
	I0224 13:26:03.851528  953268 logs.go:282] 0 containers: []
	W0224 13:26:03.851537  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:26:03.851544  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:26:03.851608  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:26:03.889596  953268 cri.go:89] found id: ""
	I0224 13:26:03.889649  953268 logs.go:282] 0 containers: []
	W0224 13:26:03.889662  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:26:03.889670  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:26:03.889742  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:26:03.929086  953268 cri.go:89] found id: ""
	I0224 13:26:03.929114  953268 logs.go:282] 0 containers: []
	W0224 13:26:03.929123  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:26:03.929129  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:26:03.929220  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:26:03.968284  953268 cri.go:89] found id: ""
	I0224 13:26:03.968378  953268 logs.go:282] 0 containers: []
	W0224 13:26:03.968402  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:26:03.968411  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:26:03.968503  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:26:04.006137  953268 cri.go:89] found id: ""
	I0224 13:26:04.006192  953268 logs.go:282] 0 containers: []
	W0224 13:26:04.006202  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:26:04.006214  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:26:04.006228  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:26:04.060115  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:26:04.060160  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:26:04.081159  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:26:04.081190  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:26:04.187843  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:26:04.187869  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:26:04.187885  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:26:04.269026  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:26:04.269074  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:26:06.812177  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:26:06.826322  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:26:06.826413  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:26:06.869987  953268 cri.go:89] found id: ""
	I0224 13:26:06.870024  953268 logs.go:282] 0 containers: []
	W0224 13:26:06.870035  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:26:06.870044  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:26:06.870113  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:26:06.914381  953268 cri.go:89] found id: ""
	I0224 13:26:06.914412  953268 logs.go:282] 0 containers: []
	W0224 13:26:06.914435  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:26:06.914443  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:26:06.914539  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:26:06.953350  953268 cri.go:89] found id: ""
	I0224 13:26:06.953384  953268 logs.go:282] 0 containers: []
	W0224 13:26:06.953394  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:26:06.953402  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:26:06.953471  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:26:06.999330  953268 cri.go:89] found id: ""
	I0224 13:26:06.999366  953268 logs.go:282] 0 containers: []
	W0224 13:26:06.999378  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:26:06.999387  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:26:06.999457  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:26:07.041112  953268 cri.go:89] found id: ""
	I0224 13:26:07.041151  953268 logs.go:282] 0 containers: []
	W0224 13:26:07.041165  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:26:07.041173  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:26:07.041244  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:26:07.079947  953268 cri.go:89] found id: ""
	I0224 13:26:07.079980  953268 logs.go:282] 0 containers: []
	W0224 13:26:07.079990  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:26:07.079996  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:26:07.080055  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:26:07.120321  953268 cri.go:89] found id: ""
	I0224 13:26:07.120354  953268 logs.go:282] 0 containers: []
	W0224 13:26:07.120366  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:26:07.120374  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:26:07.120470  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:26:07.163085  953268 cri.go:89] found id: ""
	I0224 13:26:07.163117  953268 logs.go:282] 0 containers: []
	W0224 13:26:07.163128  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:26:07.163140  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:26:07.163160  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:26:07.218041  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:26:07.218090  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:26:07.235795  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:26:07.235827  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:26:07.315173  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:26:07.315223  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:26:07.315239  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:26:07.396284  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:26:07.396326  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:26:09.942406  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:26:09.959700  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:26:09.959787  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:26:10.023887  953268 cri.go:89] found id: ""
	I0224 13:26:10.023923  953268 logs.go:282] 0 containers: []
	W0224 13:26:10.023936  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:26:10.023945  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:26:10.024014  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:26:10.085902  953268 cri.go:89] found id: ""
	I0224 13:26:10.085942  953268 logs.go:282] 0 containers: []
	W0224 13:26:10.085954  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:26:10.085962  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:26:10.086030  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:26:10.130763  953268 cri.go:89] found id: ""
	I0224 13:26:10.130800  953268 logs.go:282] 0 containers: []
	W0224 13:26:10.130812  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:26:10.130820  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:26:10.130884  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:26:10.169671  953268 cri.go:89] found id: ""
	I0224 13:26:10.169699  953268 logs.go:282] 0 containers: []
	W0224 13:26:10.169709  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:26:10.169719  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:26:10.169787  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:26:10.208011  953268 cri.go:89] found id: ""
	I0224 13:26:10.208040  953268 logs.go:282] 0 containers: []
	W0224 13:26:10.208048  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:26:10.208054  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:26:10.208116  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:26:10.246230  953268 cri.go:89] found id: ""
	I0224 13:26:10.246261  953268 logs.go:282] 0 containers: []
	W0224 13:26:10.246273  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:26:10.246281  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:26:10.246341  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:26:10.284246  953268 cri.go:89] found id: ""
	I0224 13:26:10.284282  953268 logs.go:282] 0 containers: []
	W0224 13:26:10.284293  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:26:10.284304  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:26:10.284369  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:26:10.324204  953268 cri.go:89] found id: ""
	I0224 13:26:10.324233  953268 logs.go:282] 0 containers: []
	W0224 13:26:10.324242  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:26:10.324253  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:26:10.324266  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:26:10.377376  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:26:10.377428  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:26:10.393071  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:26:10.393110  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:26:10.474310  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:26:10.474331  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:26:10.474348  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:26:10.549896  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:26:10.549943  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:26:13.097546  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:26:13.112329  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:26:13.112418  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:26:13.152601  953268 cri.go:89] found id: ""
	I0224 13:26:13.152637  953268 logs.go:282] 0 containers: []
	W0224 13:26:13.152650  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:26:13.152665  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:26:13.152738  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:26:13.196458  953268 cri.go:89] found id: ""
	I0224 13:26:13.196493  953268 logs.go:282] 0 containers: []
	W0224 13:26:13.196502  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:26:13.196508  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:26:13.196562  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:26:13.239034  953268 cri.go:89] found id: ""
	I0224 13:26:13.239063  953268 logs.go:282] 0 containers: []
	W0224 13:26:13.239073  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:26:13.239079  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:26:13.239167  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:26:13.278757  953268 cri.go:89] found id: ""
	I0224 13:26:13.278794  953268 logs.go:282] 0 containers: []
	W0224 13:26:13.278805  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:26:13.278813  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:26:13.278868  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:26:13.315827  953268 cri.go:89] found id: ""
	I0224 13:26:13.315864  953268 logs.go:282] 0 containers: []
	W0224 13:26:13.315877  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:26:13.315884  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:26:13.315948  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:26:13.355141  953268 cri.go:89] found id: ""
	I0224 13:26:13.355170  953268 logs.go:282] 0 containers: []
	W0224 13:26:13.355180  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:26:13.355187  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:26:13.355244  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:26:13.394859  953268 cri.go:89] found id: ""
	I0224 13:26:13.394902  953268 logs.go:282] 0 containers: []
	W0224 13:26:13.394912  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:26:13.394918  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:26:13.394999  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:26:13.432337  953268 cri.go:89] found id: ""
	I0224 13:26:13.432373  953268 logs.go:282] 0 containers: []
	W0224 13:26:13.432386  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:26:13.432400  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:26:13.432431  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:26:13.477737  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:26:13.477769  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:26:13.530899  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:26:13.530950  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:26:13.546069  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:26:13.546106  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:26:13.624918  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:26:13.624946  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:26:13.624960  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:26:16.204536  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:26:16.219584  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:26:16.219662  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:26:16.259144  953268 cri.go:89] found id: ""
	I0224 13:26:16.259176  953268 logs.go:282] 0 containers: []
	W0224 13:26:16.259185  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:26:16.259191  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:26:16.259252  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:26:16.298085  953268 cri.go:89] found id: ""
	I0224 13:26:16.298125  953268 logs.go:282] 0 containers: []
	W0224 13:26:16.298151  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:26:16.298157  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:26:16.298224  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:26:16.339201  953268 cri.go:89] found id: ""
	I0224 13:26:16.339226  953268 logs.go:282] 0 containers: []
	W0224 13:26:16.339233  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:26:16.339239  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:26:16.339290  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:26:16.379920  953268 cri.go:89] found id: ""
	I0224 13:26:16.379953  953268 logs.go:282] 0 containers: []
	W0224 13:26:16.379963  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:26:16.379973  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:26:16.380043  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:26:16.417746  953268 cri.go:89] found id: ""
	I0224 13:26:16.417780  953268 logs.go:282] 0 containers: []
	W0224 13:26:16.417791  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:26:16.417800  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:26:16.418125  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:26:16.462604  953268 cri.go:89] found id: ""
	I0224 13:26:16.462636  953268 logs.go:282] 0 containers: []
	W0224 13:26:16.462645  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:26:16.462654  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:26:16.462724  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:26:16.502981  953268 cri.go:89] found id: ""
	I0224 13:26:16.503025  953268 logs.go:282] 0 containers: []
	W0224 13:26:16.503037  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:26:16.503048  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:26:16.503126  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:26:16.543474  953268 cri.go:89] found id: ""
	I0224 13:26:16.543505  953268 logs.go:282] 0 containers: []
	W0224 13:26:16.543515  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:26:16.543529  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:26:16.543546  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:26:16.586909  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:26:16.586953  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:26:16.639909  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:26:16.639951  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:26:16.655033  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:26:16.655075  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:26:16.731220  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:26:16.731252  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:26:16.731267  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:26:19.313099  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:26:19.327836  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:26:19.327918  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:26:19.371689  953268 cri.go:89] found id: ""
	I0224 13:26:19.371726  953268 logs.go:282] 0 containers: []
	W0224 13:26:19.371738  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:26:19.371745  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:26:19.371815  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:26:19.410712  953268 cri.go:89] found id: ""
	I0224 13:26:19.410749  953268 logs.go:282] 0 containers: []
	W0224 13:26:19.410762  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:26:19.410769  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:26:19.410828  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:26:19.451475  953268 cri.go:89] found id: ""
	I0224 13:26:19.451508  953268 logs.go:282] 0 containers: []
	W0224 13:26:19.451517  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:26:19.451524  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:26:19.451579  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:26:19.492733  953268 cri.go:89] found id: ""
	I0224 13:26:19.492762  953268 logs.go:282] 0 containers: []
	W0224 13:26:19.492771  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:26:19.492777  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:26:19.492835  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:26:19.529853  953268 cri.go:89] found id: ""
	I0224 13:26:19.529900  953268 logs.go:282] 0 containers: []
	W0224 13:26:19.529914  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:26:19.529922  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:26:19.529983  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:26:19.567061  953268 cri.go:89] found id: ""
	I0224 13:26:19.567095  953268 logs.go:282] 0 containers: []
	W0224 13:26:19.567106  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:26:19.567115  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:26:19.567183  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:26:19.608263  953268 cri.go:89] found id: ""
	I0224 13:26:19.608301  953268 logs.go:282] 0 containers: []
	W0224 13:26:19.608313  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:26:19.608323  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:26:19.608402  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:26:19.645876  953268 cri.go:89] found id: ""
	I0224 13:26:19.645909  953268 logs.go:282] 0 containers: []
	W0224 13:26:19.645921  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:26:19.645936  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:26:19.645952  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:26:19.725831  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:26:19.725881  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:26:19.767848  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:26:19.767881  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:26:19.821062  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:26:19.821107  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:26:19.836538  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:26:19.836576  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:26:19.908782  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:26:22.409476  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:26:22.426358  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:26:22.426467  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:26:22.468229  953268 cri.go:89] found id: ""
	I0224 13:26:22.468263  953268 logs.go:282] 0 containers: []
	W0224 13:26:22.468275  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:26:22.468283  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:26:22.468340  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:26:22.511741  953268 cri.go:89] found id: ""
	I0224 13:26:22.511782  953268 logs.go:282] 0 containers: []
	W0224 13:26:22.511794  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:26:22.511805  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:26:22.511862  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:26:22.553794  953268 cri.go:89] found id: ""
	I0224 13:26:22.553834  953268 logs.go:282] 0 containers: []
	W0224 13:26:22.553849  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:26:22.553857  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:26:22.553939  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:26:22.594016  953268 cri.go:89] found id: ""
	I0224 13:26:22.594047  953268 logs.go:282] 0 containers: []
	W0224 13:26:22.594057  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:26:22.594067  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:26:22.594151  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:26:22.632357  953268 cri.go:89] found id: ""
	I0224 13:26:22.632389  953268 logs.go:282] 0 containers: []
	W0224 13:26:22.632399  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:26:22.632407  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:26:22.632484  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:26:22.673218  953268 cri.go:89] found id: ""
	I0224 13:26:22.673257  953268 logs.go:282] 0 containers: []
	W0224 13:26:22.673268  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:26:22.673279  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:26:22.673369  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:26:22.711127  953268 cri.go:89] found id: ""
	I0224 13:26:22.711159  953268 logs.go:282] 0 containers: []
	W0224 13:26:22.711171  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:26:22.711184  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:26:22.711266  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:26:22.751699  953268 cri.go:89] found id: ""
	I0224 13:26:22.751736  953268 logs.go:282] 0 containers: []
	W0224 13:26:22.751748  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:26:22.751763  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:26:22.751781  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:26:22.804033  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:26:22.804082  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:26:22.818827  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:26:22.818867  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:26:22.897110  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:26:22.897138  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:26:22.897150  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:26:22.980332  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:26:22.980377  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:26:25.526354  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:26:25.540587  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:26:25.540665  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:26:25.578108  953268 cri.go:89] found id: ""
	I0224 13:26:25.578140  953268 logs.go:282] 0 containers: []
	W0224 13:26:25.578152  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:26:25.578166  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:26:25.578238  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:26:25.615624  953268 cri.go:89] found id: ""
	I0224 13:26:25.615652  953268 logs.go:282] 0 containers: []
	W0224 13:26:25.615662  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:26:25.615668  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:26:25.615722  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:26:25.652773  953268 cri.go:89] found id: ""
	I0224 13:26:25.652809  953268 logs.go:282] 0 containers: []
	W0224 13:26:25.652821  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:26:25.652830  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:26:25.652901  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:26:25.691329  953268 cri.go:89] found id: ""
	I0224 13:26:25.691363  953268 logs.go:282] 0 containers: []
	W0224 13:26:25.691373  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:26:25.691382  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:26:25.691454  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:26:25.730058  953268 cri.go:89] found id: ""
	I0224 13:26:25.730090  953268 logs.go:282] 0 containers: []
	W0224 13:26:25.730099  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:26:25.730106  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:26:25.730174  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:26:25.772611  953268 cri.go:89] found id: ""
	I0224 13:26:25.772644  953268 logs.go:282] 0 containers: []
	W0224 13:26:25.772655  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:26:25.772664  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:26:25.772740  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:26:25.813497  953268 cri.go:89] found id: ""
	I0224 13:26:25.813538  953268 logs.go:282] 0 containers: []
	W0224 13:26:25.813550  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:26:25.813559  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:26:25.813631  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:26:25.852959  953268 cri.go:89] found id: ""
	I0224 13:26:25.853002  953268 logs.go:282] 0 containers: []
	W0224 13:26:25.853014  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:26:25.853029  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:26:25.853046  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:26:25.912218  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:26:25.912267  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:26:25.928685  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:26:25.928722  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:26:26.014709  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:26:26.014732  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:26:26.014747  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:26:26.099619  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:26:26.099662  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:26:28.642543  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:26:28.656846  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:26:28.656925  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:26:28.694027  953268 cri.go:89] found id: ""
	I0224 13:26:28.694062  953268 logs.go:282] 0 containers: []
	W0224 13:26:28.694073  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:26:28.694081  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:26:28.694150  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:26:28.735389  953268 cri.go:89] found id: ""
	I0224 13:26:28.735423  953268 logs.go:282] 0 containers: []
	W0224 13:26:28.735432  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:26:28.735438  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:26:28.735518  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:26:28.772102  953268 cri.go:89] found id: ""
	I0224 13:26:28.772131  953268 logs.go:282] 0 containers: []
	W0224 13:26:28.772140  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:26:28.772146  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:26:28.772204  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:26:28.813362  953268 cri.go:89] found id: ""
	I0224 13:26:28.813396  953268 logs.go:282] 0 containers: []
	W0224 13:26:28.813405  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:26:28.813411  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:26:28.813483  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:26:28.852358  953268 cri.go:89] found id: ""
	I0224 13:26:28.852393  953268 logs.go:282] 0 containers: []
	W0224 13:26:28.852403  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:26:28.852409  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:26:28.852481  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:26:28.895398  953268 cri.go:89] found id: ""
	I0224 13:26:28.895447  953268 logs.go:282] 0 containers: []
	W0224 13:26:28.895461  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:26:28.895469  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:26:28.895539  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:26:28.933655  953268 cri.go:89] found id: ""
	I0224 13:26:28.933685  953268 logs.go:282] 0 containers: []
	W0224 13:26:28.933697  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:26:28.933705  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:26:28.933776  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:26:28.970697  953268 cri.go:89] found id: ""
	I0224 13:26:28.970736  953268 logs.go:282] 0 containers: []
	W0224 13:26:28.970749  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:26:28.970762  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:26:28.970779  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:26:29.028116  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:26:29.028167  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:26:29.045414  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:26:29.045454  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:26:29.119283  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:26:29.119315  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:26:29.119332  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:26:29.205866  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:26:29.205915  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:26:31.753478  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:26:31.767102  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:26:31.767190  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:26:31.807458  953268 cri.go:89] found id: ""
	I0224 13:26:31.807508  953268 logs.go:282] 0 containers: []
	W0224 13:26:31.807521  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:26:31.807529  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:26:31.807666  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:26:31.855686  953268 cri.go:89] found id: ""
	I0224 13:26:31.855722  953268 logs.go:282] 0 containers: []
	W0224 13:26:31.855735  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:26:31.855743  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:26:31.855810  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:26:31.892364  953268 cri.go:89] found id: ""
	I0224 13:26:31.892403  953268 logs.go:282] 0 containers: []
	W0224 13:26:31.892413  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:26:31.892420  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:26:31.892485  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:26:31.933490  953268 cri.go:89] found id: ""
	I0224 13:26:31.933526  953268 logs.go:282] 0 containers: []
	W0224 13:26:31.933538  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:26:31.933546  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:26:31.933625  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:26:31.971777  953268 cri.go:89] found id: ""
	I0224 13:26:31.971807  953268 logs.go:282] 0 containers: []
	W0224 13:26:31.971818  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:26:31.971825  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:26:31.971891  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:26:32.010598  953268 cri.go:89] found id: ""
	I0224 13:26:32.010633  953268 logs.go:282] 0 containers: []
	W0224 13:26:32.010645  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:26:32.010653  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:26:32.010716  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:26:32.051747  953268 cri.go:89] found id: ""
	I0224 13:26:32.051786  953268 logs.go:282] 0 containers: []
	W0224 13:26:32.051807  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:26:32.051816  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:26:32.051892  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:26:32.091892  953268 cri.go:89] found id: ""
	I0224 13:26:32.091928  953268 logs.go:282] 0 containers: []
	W0224 13:26:32.091941  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:26:32.091954  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:26:32.091970  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:26:32.146831  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:26:32.146880  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:26:32.164229  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:26:32.164265  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:26:32.249197  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:26:32.249232  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:26:32.249250  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:26:32.334788  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:26:32.334831  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:26:34.880621  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:26:34.895176  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:26:34.895265  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:26:34.938040  953268 cri.go:89] found id: ""
	I0224 13:26:34.938069  953268 logs.go:282] 0 containers: []
	W0224 13:26:34.938078  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:26:34.938084  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:26:34.938153  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:26:34.979879  953268 cri.go:89] found id: ""
	I0224 13:26:34.979910  953268 logs.go:282] 0 containers: []
	W0224 13:26:34.979918  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:26:34.979924  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:26:34.979981  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:26:35.022708  953268 cri.go:89] found id: ""
	I0224 13:26:35.022742  953268 logs.go:282] 0 containers: []
	W0224 13:26:35.022753  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:26:35.022762  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:26:35.022831  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:26:35.072848  953268 cri.go:89] found id: ""
	I0224 13:26:35.072880  953268 logs.go:282] 0 containers: []
	W0224 13:26:35.072891  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:26:35.072904  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:26:35.072973  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:26:35.113191  953268 cri.go:89] found id: ""
	I0224 13:26:35.113226  953268 logs.go:282] 0 containers: []
	W0224 13:26:35.113237  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:26:35.113245  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:26:35.113326  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:26:35.151032  953268 cri.go:89] found id: ""
	I0224 13:26:35.151072  953268 logs.go:282] 0 containers: []
	W0224 13:26:35.151083  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:26:35.151092  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:26:35.151176  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:26:35.190190  953268 cri.go:89] found id: ""
	I0224 13:26:35.190220  953268 logs.go:282] 0 containers: []
	W0224 13:26:35.190230  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:26:35.190236  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:26:35.190294  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:26:35.229949  953268 cri.go:89] found id: ""
	I0224 13:26:35.229981  953268 logs.go:282] 0 containers: []
	W0224 13:26:35.229994  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:26:35.230008  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:26:35.230026  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:26:35.306380  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:26:35.306412  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:26:35.306434  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:26:35.392426  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:26:35.392474  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:26:35.439757  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:26:35.439797  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:26:35.491656  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:26:35.491701  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:26:38.008213  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:26:38.022777  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:26:38.022872  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:26:38.060125  953268 cri.go:89] found id: ""
	I0224 13:26:38.060165  953268 logs.go:282] 0 containers: []
	W0224 13:26:38.060198  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:26:38.060207  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:26:38.060285  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:26:38.097630  953268 cri.go:89] found id: ""
	I0224 13:26:38.097665  953268 logs.go:282] 0 containers: []
	W0224 13:26:38.097677  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:26:38.097686  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:26:38.097763  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:26:38.135687  953268 cri.go:89] found id: ""
	I0224 13:26:38.135722  953268 logs.go:282] 0 containers: []
	W0224 13:26:38.135736  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:26:38.135745  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:26:38.135823  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:26:38.184731  953268 cri.go:89] found id: ""
	I0224 13:26:38.184763  953268 logs.go:282] 0 containers: []
	W0224 13:26:38.184773  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:26:38.184779  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:26:38.184833  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:26:38.249840  953268 cri.go:89] found id: ""
	I0224 13:26:38.249882  953268 logs.go:282] 0 containers: []
	W0224 13:26:38.249894  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:26:38.249903  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:26:38.249976  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:26:38.292696  953268 cri.go:89] found id: ""
	I0224 13:26:38.292737  953268 logs.go:282] 0 containers: []
	W0224 13:26:38.292751  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:26:38.292762  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:26:38.292834  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:26:38.338533  953268 cri.go:89] found id: ""
	I0224 13:26:38.338565  953268 logs.go:282] 0 containers: []
	W0224 13:26:38.338577  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:26:38.338585  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:26:38.338656  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:26:38.381759  953268 cri.go:89] found id: ""
	I0224 13:26:38.381793  953268 logs.go:282] 0 containers: []
	W0224 13:26:38.381804  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:26:38.381815  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:26:38.381828  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:26:38.432075  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:26:38.432120  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:26:38.449720  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:26:38.449751  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:26:38.533296  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:26:38.533339  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:26:38.533355  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:26:38.614899  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:26:38.614957  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:26:41.158596  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:26:41.174364  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:26:41.174487  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:26:41.215352  953268 cri.go:89] found id: ""
	I0224 13:26:41.215386  953268 logs.go:282] 0 containers: []
	W0224 13:26:41.215397  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:26:41.215405  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:26:41.215472  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:26:41.256010  953268 cri.go:89] found id: ""
	I0224 13:26:41.256048  953268 logs.go:282] 0 containers: []
	W0224 13:26:41.256059  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:26:41.256073  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:26:41.256127  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:26:41.293442  953268 cri.go:89] found id: ""
	I0224 13:26:41.293478  953268 logs.go:282] 0 containers: []
	W0224 13:26:41.293488  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:26:41.293495  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:26:41.293562  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:26:41.334094  953268 cri.go:89] found id: ""
	I0224 13:26:41.334134  953268 logs.go:282] 0 containers: []
	W0224 13:26:41.334167  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:26:41.334178  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:26:41.334248  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:26:41.380610  953268 cri.go:89] found id: ""
	I0224 13:26:41.380647  953268 logs.go:282] 0 containers: []
	W0224 13:26:41.380658  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:26:41.380666  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:26:41.380736  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:26:41.420166  953268 cri.go:89] found id: ""
	I0224 13:26:41.420257  953268 logs.go:282] 0 containers: []
	W0224 13:26:41.420274  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:26:41.420283  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:26:41.420359  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:26:41.458072  953268 cri.go:89] found id: ""
	I0224 13:26:41.458109  953268 logs.go:282] 0 containers: []
	W0224 13:26:41.458123  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:26:41.458131  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:26:41.458192  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:26:41.496876  953268 cri.go:89] found id: ""
	I0224 13:26:41.496914  953268 logs.go:282] 0 containers: []
	W0224 13:26:41.496927  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:26:41.496941  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:26:41.496958  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:26:41.579558  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:26:41.579602  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:26:41.622816  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:26:41.622848  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:26:41.677032  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:26:41.677087  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:26:41.691995  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:26:41.692027  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:26:41.766258  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:26:44.267989  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:26:44.283623  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:26:44.283710  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:26:44.321850  953268 cri.go:89] found id: ""
	I0224 13:26:44.321894  953268 logs.go:282] 0 containers: []
	W0224 13:26:44.321907  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:26:44.321917  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:26:44.321987  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:26:44.360343  953268 cri.go:89] found id: ""
	I0224 13:26:44.360376  953268 logs.go:282] 0 containers: []
	W0224 13:26:44.360386  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:26:44.360394  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:26:44.360463  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:26:44.400297  953268 cri.go:89] found id: ""
	I0224 13:26:44.400325  953268 logs.go:282] 0 containers: []
	W0224 13:26:44.400338  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:26:44.400346  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:26:44.400405  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:26:44.440306  953268 cri.go:89] found id: ""
	I0224 13:26:44.440375  953268 logs.go:282] 0 containers: []
	W0224 13:26:44.440386  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:26:44.440393  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:26:44.440461  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:26:44.480394  953268 cri.go:89] found id: ""
	I0224 13:26:44.480423  953268 logs.go:282] 0 containers: []
	W0224 13:26:44.480443  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:26:44.480461  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:26:44.480547  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:26:44.518509  953268 cri.go:89] found id: ""
	I0224 13:26:44.518545  953268 logs.go:282] 0 containers: []
	W0224 13:26:44.518558  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:26:44.518568  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:26:44.518641  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:26:44.564970  953268 cri.go:89] found id: ""
	I0224 13:26:44.565010  953268 logs.go:282] 0 containers: []
	W0224 13:26:44.565019  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:26:44.565026  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:26:44.565092  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:26:44.622157  953268 cri.go:89] found id: ""
	I0224 13:26:44.622192  953268 logs.go:282] 0 containers: []
	W0224 13:26:44.622204  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:26:44.622236  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:26:44.622254  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:26:44.689473  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:26:44.689524  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:26:44.760658  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:26:44.760704  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:26:44.779903  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:26:44.779956  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:26:44.866566  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:26:44.866621  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:26:44.866644  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:26:47.450504  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:26:47.464293  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:26:47.464388  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:26:47.503068  953268 cri.go:89] found id: ""
	I0224 13:26:47.503100  953268 logs.go:282] 0 containers: []
	W0224 13:26:47.503109  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:26:47.503115  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:26:47.503237  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:26:47.544041  953268 cri.go:89] found id: ""
	I0224 13:26:47.544077  953268 logs.go:282] 0 containers: []
	W0224 13:26:47.544089  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:26:47.544097  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:26:47.544163  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:26:47.586402  953268 cri.go:89] found id: ""
	I0224 13:26:47.586452  953268 logs.go:282] 0 containers: []
	W0224 13:26:47.586465  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:26:47.586475  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:26:47.586541  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:26:47.629653  953268 cri.go:89] found id: ""
	I0224 13:26:47.629688  953268 logs.go:282] 0 containers: []
	W0224 13:26:47.629701  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:26:47.629709  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:26:47.629776  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:26:47.667644  953268 cri.go:89] found id: ""
	I0224 13:26:47.667683  953268 logs.go:282] 0 containers: []
	W0224 13:26:47.667696  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:26:47.667706  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:26:47.667779  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:26:47.708016  953268 cri.go:89] found id: ""
	I0224 13:26:47.708050  953268 logs.go:282] 0 containers: []
	W0224 13:26:47.708066  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:26:47.708072  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:26:47.708132  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:26:47.746341  953268 cri.go:89] found id: ""
	I0224 13:26:47.746370  953268 logs.go:282] 0 containers: []
	W0224 13:26:47.746380  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:26:47.746386  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:26:47.746444  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:26:47.787076  953268 cri.go:89] found id: ""
	I0224 13:26:47.787115  953268 logs.go:282] 0 containers: []
	W0224 13:26:47.787127  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:26:47.787140  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:26:47.787154  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:26:47.803360  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:26:47.803392  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:26:47.883995  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:26:47.884023  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:26:47.884036  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:26:47.966832  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:26:47.966874  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:26:48.024187  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:26:48.024248  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:26:50.576021  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:26:50.592999  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:26:50.593097  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:26:50.647641  953268 cri.go:89] found id: ""
	I0224 13:26:50.647688  953268 logs.go:282] 0 containers: []
	W0224 13:26:50.647703  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:26:50.647712  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:26:50.647782  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:26:50.710102  953268 cri.go:89] found id: ""
	I0224 13:26:50.710146  953268 logs.go:282] 0 containers: []
	W0224 13:26:50.710158  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:26:50.710167  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:26:50.710225  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:26:50.754995  953268 cri.go:89] found id: ""
	I0224 13:26:50.755028  953268 logs.go:282] 0 containers: []
	W0224 13:26:50.755041  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:26:50.755049  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:26:50.755141  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:26:50.797287  953268 cri.go:89] found id: ""
	I0224 13:26:50.797347  953268 logs.go:282] 0 containers: []
	W0224 13:26:50.797360  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:26:50.797368  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:26:50.797431  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:26:50.839878  953268 cri.go:89] found id: ""
	I0224 13:26:50.839911  953268 logs.go:282] 0 containers: []
	W0224 13:26:50.839920  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:26:50.839926  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:26:50.839984  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:26:50.878997  953268 cri.go:89] found id: ""
	I0224 13:26:50.879027  953268 logs.go:282] 0 containers: []
	W0224 13:26:50.879037  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:26:50.879054  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:26:50.879119  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:26:50.925642  953268 cri.go:89] found id: ""
	I0224 13:26:50.925672  953268 logs.go:282] 0 containers: []
	W0224 13:26:50.925680  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:26:50.925687  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:26:50.925751  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:26:50.965288  953268 cri.go:89] found id: ""
	I0224 13:26:50.965325  953268 logs.go:282] 0 containers: []
	W0224 13:26:50.965334  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:26:50.965345  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:26:50.965357  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:26:51.042699  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:26:51.042752  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:26:51.083924  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:26:51.083960  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:26:51.135605  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:26:51.135650  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:26:51.151416  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:26:51.151463  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:26:51.227963  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:26:53.729453  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:26:53.743627  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:26:53.743707  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:26:53.784288  953268 cri.go:89] found id: ""
	I0224 13:26:53.784328  953268 logs.go:282] 0 containers: []
	W0224 13:26:53.784338  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:26:53.784345  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:26:53.784398  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:26:53.822425  953268 cri.go:89] found id: ""
	I0224 13:26:53.822454  953268 logs.go:282] 0 containers: []
	W0224 13:26:53.822468  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:26:53.822474  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:26:53.822527  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:26:53.865024  953268 cri.go:89] found id: ""
	I0224 13:26:53.865062  953268 logs.go:282] 0 containers: []
	W0224 13:26:53.865075  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:26:53.865083  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:26:53.865165  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:26:53.903371  953268 cri.go:89] found id: ""
	I0224 13:26:53.903412  953268 logs.go:282] 0 containers: []
	W0224 13:26:53.903425  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:26:53.903433  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:26:53.903502  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:26:53.942847  953268 cri.go:89] found id: ""
	I0224 13:26:53.942897  953268 logs.go:282] 0 containers: []
	W0224 13:26:53.942911  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:26:53.942920  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:26:53.942988  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:26:53.981640  953268 cri.go:89] found id: ""
	I0224 13:26:53.981671  953268 logs.go:282] 0 containers: []
	W0224 13:26:53.981680  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:26:53.981688  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:26:53.981762  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:26:54.021807  953268 cri.go:89] found id: ""
	I0224 13:26:54.021844  953268 logs.go:282] 0 containers: []
	W0224 13:26:54.021857  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:26:54.021866  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:26:54.021940  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:26:54.060251  953268 cri.go:89] found id: ""
	I0224 13:26:54.060285  953268 logs.go:282] 0 containers: []
	W0224 13:26:54.060294  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:26:54.060305  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:26:54.060317  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:26:54.111190  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:26:54.111235  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:26:54.125524  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:26:54.125558  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:26:54.205801  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:26:54.205837  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:26:54.205855  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:26:54.283358  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:26:54.283402  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:26:56.823991  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:26:56.838925  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:26:56.838995  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:26:56.880999  953268 cri.go:89] found id: ""
	I0224 13:26:56.881033  953268 logs.go:282] 0 containers: []
	W0224 13:26:56.881043  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:26:56.881049  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:26:56.881118  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:26:56.928595  953268 cri.go:89] found id: ""
	I0224 13:26:56.928626  953268 logs.go:282] 0 containers: []
	W0224 13:26:56.928635  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:26:56.928642  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:26:56.928701  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:26:56.970099  953268 cri.go:89] found id: ""
	I0224 13:26:56.970132  953268 logs.go:282] 0 containers: []
	W0224 13:26:56.970140  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:26:56.970147  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:26:56.970223  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:26:57.007276  953268 cri.go:89] found id: ""
	I0224 13:26:57.007313  953268 logs.go:282] 0 containers: []
	W0224 13:26:57.007323  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:26:57.007332  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:26:57.007393  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:26:57.047592  953268 cri.go:89] found id: ""
	I0224 13:26:57.047627  953268 logs.go:282] 0 containers: []
	W0224 13:26:57.047639  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:26:57.047646  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:26:57.047720  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:26:57.088041  953268 cri.go:89] found id: ""
	I0224 13:26:57.088085  953268 logs.go:282] 0 containers: []
	W0224 13:26:57.088096  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:26:57.088104  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:26:57.088174  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:26:57.126992  953268 cri.go:89] found id: ""
	I0224 13:26:57.127021  953268 logs.go:282] 0 containers: []
	W0224 13:26:57.127030  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:26:57.127036  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:26:57.127099  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:26:57.168196  953268 cri.go:89] found id: ""
	I0224 13:26:57.168227  953268 logs.go:282] 0 containers: []
	W0224 13:26:57.168237  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:26:57.168248  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:26:57.168261  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:26:57.218869  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:26:57.218913  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:26:57.233780  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:26:57.233813  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:26:57.310257  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:26:57.310288  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:26:57.310303  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:26:57.389045  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:26:57.389092  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:26:59.938120  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:26:59.955687  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:26:59.955764  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:26:59.995015  953268 cri.go:89] found id: ""
	I0224 13:26:59.995045  953268 logs.go:282] 0 containers: []
	W0224 13:26:59.995054  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:26:59.995060  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:26:59.995117  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:27:00.034256  953268 cri.go:89] found id: ""
	I0224 13:27:00.034296  953268 logs.go:282] 0 containers: []
	W0224 13:27:00.034308  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:27:00.034317  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:27:00.034399  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:27:00.073156  953268 cri.go:89] found id: ""
	I0224 13:27:00.073195  953268 logs.go:282] 0 containers: []
	W0224 13:27:00.073206  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:27:00.073215  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:27:00.073288  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:27:00.113050  953268 cri.go:89] found id: ""
	I0224 13:27:00.113085  953268 logs.go:282] 0 containers: []
	W0224 13:27:00.113094  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:27:00.113100  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:27:00.113167  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:27:00.155160  953268 cri.go:89] found id: ""
	I0224 13:27:00.155200  953268 logs.go:282] 0 containers: []
	W0224 13:27:00.155227  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:27:00.155237  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:27:00.155334  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:27:00.197481  953268 cri.go:89] found id: ""
	I0224 13:27:00.197518  953268 logs.go:282] 0 containers: []
	W0224 13:27:00.197531  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:27:00.197539  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:27:00.197620  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:27:00.237963  953268 cri.go:89] found id: ""
	I0224 13:27:00.237998  953268 logs.go:282] 0 containers: []
	W0224 13:27:00.238009  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:27:00.238017  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:27:00.238088  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:27:00.275407  953268 cri.go:89] found id: ""
	I0224 13:27:00.275440  953268 logs.go:282] 0 containers: []
	W0224 13:27:00.275450  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:27:00.275462  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:27:00.275474  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:27:00.315574  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:27:00.315608  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:27:00.372770  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:27:00.372822  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:27:00.388026  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:27:00.388062  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:27:00.465490  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:27:00.465522  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:27:00.465541  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:27:03.062180  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:27:03.078306  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:27:03.078379  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:27:03.119145  953268 cri.go:89] found id: ""
	I0224 13:27:03.119179  953268 logs.go:282] 0 containers: []
	W0224 13:27:03.119190  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:27:03.119197  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:27:03.119274  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:27:03.155766  953268 cri.go:89] found id: ""
	I0224 13:27:03.155822  953268 logs.go:282] 0 containers: []
	W0224 13:27:03.155836  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:27:03.155847  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:27:03.155918  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:27:03.196332  953268 cri.go:89] found id: ""
	I0224 13:27:03.196367  953268 logs.go:282] 0 containers: []
	W0224 13:27:03.196379  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:27:03.196384  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:27:03.196438  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:27:03.238442  953268 cri.go:89] found id: ""
	I0224 13:27:03.238487  953268 logs.go:282] 0 containers: []
	W0224 13:27:03.238498  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:27:03.238507  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:27:03.238589  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:27:03.282011  953268 cri.go:89] found id: ""
	I0224 13:27:03.282046  953268 logs.go:282] 0 containers: []
	W0224 13:27:03.282055  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:27:03.282062  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:27:03.282117  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:27:03.319789  953268 cri.go:89] found id: ""
	I0224 13:27:03.319827  953268 logs.go:282] 0 containers: []
	W0224 13:27:03.319839  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:27:03.319847  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:27:03.319916  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:27:03.360526  953268 cri.go:89] found id: ""
	I0224 13:27:03.360565  953268 logs.go:282] 0 containers: []
	W0224 13:27:03.360575  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:27:03.360582  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:27:03.360636  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:27:03.396771  953268 cri.go:89] found id: ""
	I0224 13:27:03.396811  953268 logs.go:282] 0 containers: []
	W0224 13:27:03.396822  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:27:03.396837  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:27:03.396856  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:27:03.468576  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:27:03.468608  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:27:03.468622  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:27:03.553834  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:27:03.553881  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:27:03.596795  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:27:03.596838  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:27:03.650347  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:27:03.650403  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:27:06.165134  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:27:06.182923  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:27:06.183012  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:27:06.227096  953268 cri.go:89] found id: ""
	I0224 13:27:06.227127  953268 logs.go:282] 0 containers: []
	W0224 13:27:06.227138  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:27:06.227150  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:27:06.227236  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:27:06.274187  953268 cri.go:89] found id: ""
	I0224 13:27:06.274221  953268 logs.go:282] 0 containers: []
	W0224 13:27:06.274233  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:27:06.274240  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:27:06.274324  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:27:06.313140  953268 cri.go:89] found id: ""
	I0224 13:27:06.313175  953268 logs.go:282] 0 containers: []
	W0224 13:27:06.313186  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:27:06.313192  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:27:06.313250  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:27:06.358006  953268 cri.go:89] found id: ""
	I0224 13:27:06.358042  953268 logs.go:282] 0 containers: []
	W0224 13:27:06.358054  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:27:06.358062  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:27:06.358132  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:27:06.397252  953268 cri.go:89] found id: ""
	I0224 13:27:06.397287  953268 logs.go:282] 0 containers: []
	W0224 13:27:06.397298  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:27:06.397332  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:27:06.397402  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:27:06.436263  953268 cri.go:89] found id: ""
	I0224 13:27:06.436296  953268 logs.go:282] 0 containers: []
	W0224 13:27:06.436306  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:27:06.436323  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:27:06.436394  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:27:06.479724  953268 cri.go:89] found id: ""
	I0224 13:27:06.479767  953268 logs.go:282] 0 containers: []
	W0224 13:27:06.479781  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:27:06.479789  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:27:06.479869  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:27:06.518870  953268 cri.go:89] found id: ""
	I0224 13:27:06.518901  953268 logs.go:282] 0 containers: []
	W0224 13:27:06.518912  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:27:06.518932  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:27:06.518951  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:27:06.583478  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:27:06.583521  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:27:06.602098  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:27:06.602146  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:27:06.684036  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:27:06.684063  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:27:06.684076  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:27:06.773375  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:27:06.773417  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:27:09.324369  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:27:09.343105  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:27:09.343214  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:27:09.392123  953268 cri.go:89] found id: ""
	I0224 13:27:09.392167  953268 logs.go:282] 0 containers: []
	W0224 13:27:09.392179  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:27:09.392188  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:27:09.392256  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:27:09.444373  953268 cri.go:89] found id: ""
	I0224 13:27:09.444401  953268 logs.go:282] 0 containers: []
	W0224 13:27:09.444409  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:27:09.444415  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:27:09.444476  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:27:09.490476  953268 cri.go:89] found id: ""
	I0224 13:27:09.490519  953268 logs.go:282] 0 containers: []
	W0224 13:27:09.490532  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:27:09.490542  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:27:09.490617  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:27:09.527427  953268 cri.go:89] found id: ""
	I0224 13:27:09.527456  953268 logs.go:282] 0 containers: []
	W0224 13:27:09.527464  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:27:09.527470  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:27:09.527523  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:27:09.567277  953268 cri.go:89] found id: ""
	I0224 13:27:09.567308  953268 logs.go:282] 0 containers: []
	W0224 13:27:09.567317  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:27:09.567322  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:27:09.567375  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:27:09.607123  953268 cri.go:89] found id: ""
	I0224 13:27:09.607158  953268 logs.go:282] 0 containers: []
	W0224 13:27:09.607176  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:27:09.607185  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:27:09.607257  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:27:09.652733  953268 cri.go:89] found id: ""
	I0224 13:27:09.652767  953268 logs.go:282] 0 containers: []
	W0224 13:27:09.652778  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:27:09.652786  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:27:09.652861  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:27:09.695316  953268 cri.go:89] found id: ""
	I0224 13:27:09.695354  953268 logs.go:282] 0 containers: []
	W0224 13:27:09.695367  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:27:09.695380  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:27:09.695399  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:27:09.738443  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:27:09.738476  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:27:09.791305  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:27:09.791351  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:27:09.807825  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:27:09.807866  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:27:09.879245  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:27:09.879269  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:27:09.879282  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
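
The cycle above is the harness's health probe: for each control-plane component it runs "sudo crictl ps -a --quiet --name=<component>", and since every query returns no IDs it falls back to node-level log collection. A minimal standalone Go sketch of the same per-component check (not minikube's cri.go; it assumes crictl is on PATH and the CRI socket is readable by the current user, whereas the log runs it with sudo over SSH):

// Sketch only: list CRI containers per control-plane component and report
// which components have none, mirroring the crictl queries in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Equivalent of: crictl ps -a --quiet --name=<component>
		out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}
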
	I0224 13:27:12.483492  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:27:12.498268  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:27:12.498362  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:27:12.538172  953268 cri.go:89] found id: ""
	I0224 13:27:12.538216  953268 logs.go:282] 0 containers: []
	W0224 13:27:12.538229  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:27:12.538237  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:27:12.538297  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:27:12.578220  953268 cri.go:89] found id: ""
	I0224 13:27:12.578257  953268 logs.go:282] 0 containers: []
	W0224 13:27:12.578269  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:27:12.578277  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:27:12.578348  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:27:12.621573  953268 cri.go:89] found id: ""
	I0224 13:27:12.621606  953268 logs.go:282] 0 containers: []
	W0224 13:27:12.621614  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:27:12.621625  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:27:12.621692  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:27:12.667344  953268 cri.go:89] found id: ""
	I0224 13:27:12.667375  953268 logs.go:282] 0 containers: []
	W0224 13:27:12.667385  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:27:12.667393  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:27:12.667474  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:27:12.707681  953268 cri.go:89] found id: ""
	I0224 13:27:12.707717  953268 logs.go:282] 0 containers: []
	W0224 13:27:12.707728  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:27:12.707736  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:27:12.707808  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:27:12.745219  953268 cri.go:89] found id: ""
	I0224 13:27:12.745246  953268 logs.go:282] 0 containers: []
	W0224 13:27:12.745259  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:27:12.745268  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:27:12.745357  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:27:12.789412  953268 cri.go:89] found id: ""
	I0224 13:27:12.789445  953268 logs.go:282] 0 containers: []
	W0224 13:27:12.789457  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:27:12.789472  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:27:12.789544  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:27:12.840505  953268 cri.go:89] found id: ""
	I0224 13:27:12.840538  953268 logs.go:282] 0 containers: []
	W0224 13:27:12.840575  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:27:12.840592  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:27:12.840616  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:27:12.913027  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:27:12.913064  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:27:12.932023  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:27:12.932056  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:27:13.035790  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:27:13.035821  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:27:13.035840  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:27:13.168676  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:27:13.168732  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:27:15.759716  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:27:15.777792  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:27:15.777881  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:27:15.820830  953268 cri.go:89] found id: ""
	I0224 13:27:15.820863  953268 logs.go:282] 0 containers: []
	W0224 13:27:15.820874  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:27:15.820882  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:27:15.820946  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:27:15.862575  953268 cri.go:89] found id: ""
	I0224 13:27:15.862608  953268 logs.go:282] 0 containers: []
	W0224 13:27:15.862620  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:27:15.862627  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:27:15.862692  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:27:15.900386  953268 cri.go:89] found id: ""
	I0224 13:27:15.900426  953268 logs.go:282] 0 containers: []
	W0224 13:27:15.900440  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:27:15.900447  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:27:15.900514  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:27:15.941411  953268 cri.go:89] found id: ""
	I0224 13:27:15.941451  953268 logs.go:282] 0 containers: []
	W0224 13:27:15.941463  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:27:15.941471  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:27:15.941537  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:27:15.979848  953268 cri.go:89] found id: ""
	I0224 13:27:15.979880  953268 logs.go:282] 0 containers: []
	W0224 13:27:15.979893  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:27:15.979900  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:27:15.979975  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:27:16.023437  953268 cri.go:89] found id: ""
	I0224 13:27:16.023476  953268 logs.go:282] 0 containers: []
	W0224 13:27:16.023489  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:27:16.023498  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:27:16.023586  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:27:16.069105  953268 cri.go:89] found id: ""
	I0224 13:27:16.069138  953268 logs.go:282] 0 containers: []
	W0224 13:27:16.069151  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:27:16.069159  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:27:16.069232  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:27:16.115694  953268 cri.go:89] found id: ""
	I0224 13:27:16.115729  953268 logs.go:282] 0 containers: []
	W0224 13:27:16.115740  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:27:16.115753  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:27:16.115774  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:27:16.174397  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:27:16.174439  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:27:16.193478  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:27:16.193522  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:27:16.292005  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:27:16.292037  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:27:16.292059  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:27:16.423864  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:27:16.423941  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
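
Every "describe nodes" attempt above fails with "The connection to the server localhost:8443 was refused" because no kube-apiserver container exists, so nothing is listening on the apiserver port. A minimal sketch (illustrative only, not part of the test harness) that reproduces the symptom with a plain TCP dial:

// Sketch only: a refused TCP dial to 127.0.0.1:8443 is exactly what kubectl
// sees in the log above while the apiserver is down.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
	if err != nil {
		// Expected while no apiserver is running, e.g.
		// "dial tcp 127.0.0.1:8443: connect: connection refused"
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is open:", conn.RemoteAddr())
}
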
	I0224 13:27:18.993497  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:27:19.013908  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:27:19.013986  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:27:19.071563  953268 cri.go:89] found id: ""
	I0224 13:27:19.071599  953268 logs.go:282] 0 containers: []
	W0224 13:27:19.071613  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:27:19.071622  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:27:19.071684  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:27:19.122617  953268 cri.go:89] found id: ""
	I0224 13:27:19.122654  953268 logs.go:282] 0 containers: []
	W0224 13:27:19.122666  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:27:19.122674  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:27:19.122753  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:27:19.171846  953268 cri.go:89] found id: ""
	I0224 13:27:19.171878  953268 logs.go:282] 0 containers: []
	W0224 13:27:19.171892  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:27:19.171899  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:27:19.171972  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:27:19.241429  953268 cri.go:89] found id: ""
	I0224 13:27:19.241470  953268 logs.go:282] 0 containers: []
	W0224 13:27:19.241484  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:27:19.241494  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:27:19.241566  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:27:19.296792  953268 cri.go:89] found id: ""
	I0224 13:27:19.296836  953268 logs.go:282] 0 containers: []
	W0224 13:27:19.296851  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:27:19.296861  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:27:19.296940  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:27:19.348050  953268 cri.go:89] found id: ""
	I0224 13:27:19.348089  953268 logs.go:282] 0 containers: []
	W0224 13:27:19.348101  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:27:19.348109  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:27:19.348179  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:27:19.400685  953268 cri.go:89] found id: ""
	I0224 13:27:19.400716  953268 logs.go:282] 0 containers: []
	W0224 13:27:19.400727  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:27:19.400736  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:27:19.400806  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:27:19.451246  953268 cri.go:89] found id: ""
	I0224 13:27:19.451280  953268 logs.go:282] 0 containers: []
	W0224 13:27:19.451291  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:27:19.451304  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:27:19.451322  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:27:19.510964  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:27:19.511015  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:27:19.529666  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:27:19.529720  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:27:19.635844  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:27:19.635872  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:27:19.635890  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:27:19.745053  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:27:19.745113  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:27:22.313463  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:27:22.329875  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:27:22.329962  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:27:22.379562  953268 cri.go:89] found id: ""
	I0224 13:27:22.379600  953268 logs.go:282] 0 containers: []
	W0224 13:27:22.379613  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:27:22.379622  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:27:22.379693  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:27:22.433814  953268 cri.go:89] found id: ""
	I0224 13:27:22.433849  953268 logs.go:282] 0 containers: []
	W0224 13:27:22.433861  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:27:22.433870  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:27:22.433938  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:27:22.492882  953268 cri.go:89] found id: ""
	I0224 13:27:22.492916  953268 logs.go:282] 0 containers: []
	W0224 13:27:22.492929  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:27:22.492937  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:27:22.493015  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:27:22.534060  953268 cri.go:89] found id: ""
	I0224 13:27:22.534090  953268 logs.go:282] 0 containers: []
	W0224 13:27:22.534111  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:27:22.534121  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:27:22.534197  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:27:22.574577  953268 cri.go:89] found id: ""
	I0224 13:27:22.574613  953268 logs.go:282] 0 containers: []
	W0224 13:27:22.574625  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:27:22.574633  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:27:22.574712  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:27:22.616121  953268 cri.go:89] found id: ""
	I0224 13:27:22.616166  953268 logs.go:282] 0 containers: []
	W0224 13:27:22.616180  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:27:22.616189  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:27:22.616259  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:27:22.656657  953268 cri.go:89] found id: ""
	I0224 13:27:22.656694  953268 logs.go:282] 0 containers: []
	W0224 13:27:22.656705  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:27:22.656713  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:27:22.656781  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:27:22.700032  953268 cri.go:89] found id: ""
	I0224 13:27:22.700067  953268 logs.go:282] 0 containers: []
	W0224 13:27:22.700079  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:27:22.700091  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:27:22.700107  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:27:22.767177  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:27:22.767224  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:27:22.782428  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:27:22.782460  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:27:22.863373  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:27:22.863403  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:27:22.863419  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:27:22.945484  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:27:22.945531  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
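
With no control-plane containers to inspect, the "Gathering logs for ..." steps fall back to node-level sources: the kubelet and CRI-O journals, filtered dmesg output, and the CRI container list. A minimal sketch that collects the same sources locally (an assumption; the harness runs these commands over SSH and with sudo, so unprivileged runs may hit permission errors):

// Sketch only: gather the node-level diagnostics named in the log above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string][]string{
		"kubelet":          {"journalctl", "-u", "kubelet", "-n", "400"},
		"CRI-O":            {"journalctl", "-u", "crio", "-n", "400"},
		"dmesg":            {"sh", "-c", "dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		"container status": {"crictl", "ps", "-a"},
	}
	for name, args := range sources {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", name, err)
			continue
		}
		fmt.Printf("== %s ==\n%s\n", name, out)
	}
}
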
	I0224 13:27:25.503478  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:27:25.522715  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:27:25.522883  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:27:25.570811  953268 cri.go:89] found id: ""
	I0224 13:27:25.570851  953268 logs.go:282] 0 containers: []
	W0224 13:27:25.570863  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:27:25.570871  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:27:25.570945  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:27:25.614367  953268 cri.go:89] found id: ""
	I0224 13:27:25.614402  953268 logs.go:282] 0 containers: []
	W0224 13:27:25.614415  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:27:25.614424  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:27:25.614509  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:27:25.664328  953268 cri.go:89] found id: ""
	I0224 13:27:25.664356  953268 logs.go:282] 0 containers: []
	W0224 13:27:25.664365  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:27:25.664371  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:27:25.664424  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:27:25.718143  953268 cri.go:89] found id: ""
	I0224 13:27:25.718177  953268 logs.go:282] 0 containers: []
	W0224 13:27:25.718189  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:27:25.718197  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:27:25.718264  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:27:25.765611  953268 cri.go:89] found id: ""
	I0224 13:27:25.765648  953268 logs.go:282] 0 containers: []
	W0224 13:27:25.765659  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:27:25.765668  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:27:25.765740  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:27:25.813453  953268 cri.go:89] found id: ""
	I0224 13:27:25.813493  953268 logs.go:282] 0 containers: []
	W0224 13:27:25.813505  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:27:25.813514  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:27:25.813590  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:27:25.866709  953268 cri.go:89] found id: ""
	I0224 13:27:25.866746  953268 logs.go:282] 0 containers: []
	W0224 13:27:25.866758  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:27:25.866767  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:27:25.866838  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:27:25.931127  953268 cri.go:89] found id: ""
	I0224 13:27:25.931162  953268 logs.go:282] 0 containers: []
	W0224 13:27:25.931174  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:27:25.931187  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:27:25.931214  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:27:26.062970  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:27:26.063006  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:27:26.082732  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:27:26.082780  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:27:26.190724  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:27:26.190757  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:27:26.190775  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:27:26.308151  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:27:26.308214  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:27:28.859499  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:27:28.879034  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:27:28.879130  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:27:28.935473  953268 cri.go:89] found id: ""
	I0224 13:27:28.935506  953268 logs.go:282] 0 containers: []
	W0224 13:27:28.935517  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:27:28.935526  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:27:28.935597  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:27:28.974875  953268 cri.go:89] found id: ""
	I0224 13:27:28.974913  953268 logs.go:282] 0 containers: []
	W0224 13:27:28.974923  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:27:28.974934  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:27:28.975039  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:27:29.019126  953268 cri.go:89] found id: ""
	I0224 13:27:29.019162  953268 logs.go:282] 0 containers: []
	W0224 13:27:29.019173  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:27:29.019180  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:27:29.019255  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:27:29.060109  953268 cri.go:89] found id: ""
	I0224 13:27:29.060147  953268 logs.go:282] 0 containers: []
	W0224 13:27:29.060156  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:27:29.060164  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:27:29.060237  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:27:29.102004  953268 cri.go:89] found id: ""
	I0224 13:27:29.102037  953268 logs.go:282] 0 containers: []
	W0224 13:27:29.102046  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:27:29.102052  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:27:29.102124  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:27:29.145793  953268 cri.go:89] found id: ""
	I0224 13:27:29.145827  953268 logs.go:282] 0 containers: []
	W0224 13:27:29.145839  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:27:29.145847  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:27:29.145913  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:27:29.187770  953268 cri.go:89] found id: ""
	I0224 13:27:29.187807  953268 logs.go:282] 0 containers: []
	W0224 13:27:29.187830  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:27:29.187839  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:27:29.187906  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:27:29.232363  953268 cri.go:89] found id: ""
	I0224 13:27:29.232399  953268 logs.go:282] 0 containers: []
	W0224 13:27:29.232413  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:27:29.232426  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:27:29.232447  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:27:29.328377  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:27:29.328486  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:27:29.328521  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:27:29.432683  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:27:29.432725  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:27:29.492894  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:27:29.492945  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:27:29.560346  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:27:29.560395  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
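
The "container status" command in the cycle above is itself a fallback chain (sudo `which crictl || echo crictl` ps -a || sudo docker ps -a): prefer the CRI-native tool and only use docker if crictl is missing or errors out. A rough equivalent in Go (illustrative; the harness keeps it as a single shell pipeline):

// Sketch only: try crictl first, fall back to docker for the container list.
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus returns the output of "crictl ps -a", falling back to
// "docker ps -a" when crictl is unavailable or fails.
func containerStatus() ([]byte, error) {
	if out, err := exec.Command("crictl", "ps", "-a").CombinedOutput(); err == nil {
		return out, nil
	}
	return exec.Command("docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("neither crictl nor docker produced a listing:", err)
		return
	}
	fmt.Printf("%s", out)
}
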
	I0224 13:27:32.078582  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:27:32.098098  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:27:32.098229  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:27:32.137354  953268 cri.go:89] found id: ""
	I0224 13:27:32.137392  953268 logs.go:282] 0 containers: []
	W0224 13:27:32.137405  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:27:32.137414  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:27:32.137495  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:27:32.178920  953268 cri.go:89] found id: ""
	I0224 13:27:32.178952  953268 logs.go:282] 0 containers: []
	W0224 13:27:32.178961  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:27:32.178967  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:27:32.179034  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:27:32.221176  953268 cri.go:89] found id: ""
	I0224 13:27:32.221212  953268 logs.go:282] 0 containers: []
	W0224 13:27:32.221225  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:27:32.221233  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:27:32.221331  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:27:32.276680  953268 cri.go:89] found id: ""
	I0224 13:27:32.276717  953268 logs.go:282] 0 containers: []
	W0224 13:27:32.276731  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:27:32.276739  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:27:32.276809  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:27:32.325405  953268 cri.go:89] found id: ""
	I0224 13:27:32.325453  953268 logs.go:282] 0 containers: []
	W0224 13:27:32.325467  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:27:32.325477  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:27:32.325554  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:27:32.376446  953268 cri.go:89] found id: ""
	I0224 13:27:32.376507  953268 logs.go:282] 0 containers: []
	W0224 13:27:32.376520  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:27:32.376530  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:27:32.376610  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:27:32.447821  953268 cri.go:89] found id: ""
	I0224 13:27:32.447858  953268 logs.go:282] 0 containers: []
	W0224 13:27:32.447873  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:27:32.447883  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:27:32.447958  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:27:32.496235  953268 cri.go:89] found id: ""
	I0224 13:27:32.496275  953268 logs.go:282] 0 containers: []
	W0224 13:27:32.496288  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:27:32.496302  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:27:32.496319  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:27:32.545362  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:27:32.545402  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:27:32.606586  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:27:32.606633  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:27:32.626859  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:27:32.626971  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:27:32.719253  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:27:32.719281  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:27:32.719298  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:27:35.301807  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:27:35.317675  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:27:35.317756  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:27:35.365337  953268 cri.go:89] found id: ""
	I0224 13:27:35.365369  953268 logs.go:282] 0 containers: []
	W0224 13:27:35.365380  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:27:35.365389  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:27:35.365461  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:27:35.408024  953268 cri.go:89] found id: ""
	I0224 13:27:35.408050  953268 logs.go:282] 0 containers: []
	W0224 13:27:35.408058  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:27:35.408064  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:27:35.408118  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:27:35.464116  953268 cri.go:89] found id: ""
	I0224 13:27:35.464155  953268 logs.go:282] 0 containers: []
	W0224 13:27:35.464167  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:27:35.464174  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:27:35.464240  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:27:35.517911  953268 cri.go:89] found id: ""
	I0224 13:27:35.517948  953268 logs.go:282] 0 containers: []
	W0224 13:27:35.517959  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:27:35.517969  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:27:35.518036  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:27:35.562010  953268 cri.go:89] found id: ""
	I0224 13:27:35.562040  953268 logs.go:282] 0 containers: []
	W0224 13:27:35.562049  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:27:35.562055  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:27:35.562108  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:27:35.605113  953268 cri.go:89] found id: ""
	I0224 13:27:35.605161  953268 logs.go:282] 0 containers: []
	W0224 13:27:35.605176  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:27:35.605187  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:27:35.605260  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:27:35.654843  953268 cri.go:89] found id: ""
	I0224 13:27:35.654884  953268 logs.go:282] 0 containers: []
	W0224 13:27:35.654898  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:27:35.654906  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:27:35.654981  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:27:35.700885  953268 cri.go:89] found id: ""
	I0224 13:27:35.700918  953268 logs.go:282] 0 containers: []
	W0224 13:27:35.700930  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:27:35.700944  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:27:35.700961  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:27:35.718088  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:27:35.718129  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:27:35.791506  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:27:35.791530  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:27:35.791552  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:27:35.867928  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:27:35.867982  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:27:35.917191  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:27:35.917226  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
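
The timestamps show the whole probe repeating roughly every three seconds (13:27:09, :12, :15, ... :35) without the kube-apiserver process ever appearing. A minimal sketch of that retry shape, using pgrep the way the log does; the 3-second interval and 5-minute deadline here are illustrative values, not the harness's configuration:

// Sketch only: poll for a kube-apiserver process until it appears or a
// deadline expires, mirroring the repeated pgrep in the log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(5 * time.Minute)
	ticker := time.NewTicker(3 * time.Second)
	defer ticker.Stop()

	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		<-ticker.C
	}
	fmt.Println("gave up waiting for kube-apiserver")
}
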
	I0224 13:27:38.486953  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:27:38.507137  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:27:38.507205  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:27:38.551071  953268 cri.go:89] found id: ""
	I0224 13:27:38.551103  953268 logs.go:282] 0 containers: []
	W0224 13:27:38.551115  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:27:38.551122  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:27:38.551186  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:27:38.605564  953268 cri.go:89] found id: ""
	I0224 13:27:38.605597  953268 logs.go:282] 0 containers: []
	W0224 13:27:38.605608  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:27:38.605617  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:27:38.605678  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:27:38.648405  953268 cri.go:89] found id: ""
	I0224 13:27:38.648449  953268 logs.go:282] 0 containers: []
	W0224 13:27:38.648461  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:27:38.648469  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:27:38.648539  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:27:38.688909  953268 cri.go:89] found id: ""
	I0224 13:27:38.688948  953268 logs.go:282] 0 containers: []
	W0224 13:27:38.688960  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:27:38.688968  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:27:38.689034  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:27:38.727955  953268 cri.go:89] found id: ""
	I0224 13:27:38.727988  953268 logs.go:282] 0 containers: []
	W0224 13:27:38.728000  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:27:38.728009  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:27:38.728071  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:27:38.764545  953268 cri.go:89] found id: ""
	I0224 13:27:38.764583  953268 logs.go:282] 0 containers: []
	W0224 13:27:38.764604  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:27:38.764612  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:27:38.764696  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:27:38.808011  953268 cri.go:89] found id: ""
	I0224 13:27:38.808047  953268 logs.go:282] 0 containers: []
	W0224 13:27:38.808057  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:27:38.808065  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:27:38.808132  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:27:38.850876  953268 cri.go:89] found id: ""
	I0224 13:27:38.850903  953268 logs.go:282] 0 containers: []
	W0224 13:27:38.850912  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:27:38.850923  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:27:38.850937  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:27:38.906926  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:27:38.906966  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:27:38.924310  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:27:38.924348  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:27:39.009208  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:27:39.009241  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:27:39.009260  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:27:39.092115  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:27:39.092166  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:27:41.648147  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:27:41.667461  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:27:41.667551  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:27:41.710255  953268 cri.go:89] found id: ""
	I0224 13:27:41.710290  953268 logs.go:282] 0 containers: []
	W0224 13:27:41.710299  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:27:41.710305  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:27:41.710367  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:27:41.765429  953268 cri.go:89] found id: ""
	I0224 13:27:41.765483  953268 logs.go:282] 0 containers: []
	W0224 13:27:41.765497  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:27:41.765506  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:27:41.765582  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:27:41.813336  953268 cri.go:89] found id: ""
	I0224 13:27:41.813373  953268 logs.go:282] 0 containers: []
	W0224 13:27:41.813385  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:27:41.813395  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:27:41.813465  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:27:41.857721  953268 cri.go:89] found id: ""
	I0224 13:27:41.857749  953268 logs.go:282] 0 containers: []
	W0224 13:27:41.857759  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:27:41.857765  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:27:41.857821  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:27:41.901662  953268 cri.go:89] found id: ""
	I0224 13:27:41.901696  953268 logs.go:282] 0 containers: []
	W0224 13:27:41.901709  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:27:41.901717  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:27:41.901804  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:27:41.948841  953268 cri.go:89] found id: ""
	I0224 13:27:41.948868  953268 logs.go:282] 0 containers: []
	W0224 13:27:41.948886  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:27:41.948892  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:27:41.948948  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:27:42.005127  953268 cri.go:89] found id: ""
	I0224 13:27:42.005158  953268 logs.go:282] 0 containers: []
	W0224 13:27:42.005167  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:27:42.005175  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:27:42.005246  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:27:42.049796  953268 cri.go:89] found id: ""
	I0224 13:27:42.049828  953268 logs.go:282] 0 containers: []
	W0224 13:27:42.049840  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:27:42.049855  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:27:42.049871  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:27:42.119593  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:27:42.119641  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:27:42.137071  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:27:42.137109  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:27:42.235623  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:27:42.235657  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:27:42.235725  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:27:42.358797  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:27:42.358854  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:27:44.912963  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:27:44.928397  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:27:44.928506  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:27:44.970937  953268 cri.go:89] found id: ""
	I0224 13:27:44.970978  953268 logs.go:282] 0 containers: []
	W0224 13:27:44.970992  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:27:44.971000  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:27:44.971062  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:27:45.007679  953268 cri.go:89] found id: ""
	I0224 13:27:45.007717  953268 logs.go:282] 0 containers: []
	W0224 13:27:45.007728  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:27:45.007736  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:27:45.007791  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:27:45.044057  953268 cri.go:89] found id: ""
	I0224 13:27:45.044088  953268 logs.go:282] 0 containers: []
	W0224 13:27:45.044099  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:27:45.044107  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:27:45.044176  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:27:45.079661  953268 cri.go:89] found id: ""
	I0224 13:27:45.079702  953268 logs.go:282] 0 containers: []
	W0224 13:27:45.079715  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:27:45.079723  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:27:45.079791  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:27:45.115753  953268 cri.go:89] found id: ""
	I0224 13:27:45.115799  953268 logs.go:282] 0 containers: []
	W0224 13:27:45.115812  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:27:45.115824  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:27:45.115899  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:27:45.154510  953268 cri.go:89] found id: ""
	I0224 13:27:45.154549  953268 logs.go:282] 0 containers: []
	W0224 13:27:45.154561  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:27:45.154570  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:27:45.154645  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:27:45.195918  953268 cri.go:89] found id: ""
	I0224 13:27:45.195953  953268 logs.go:282] 0 containers: []
	W0224 13:27:45.195966  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:27:45.195981  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:27:45.196052  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:27:45.237880  953268 cri.go:89] found id: ""
	I0224 13:27:45.237912  953268 logs.go:282] 0 containers: []
	W0224 13:27:45.237921  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:27:45.237934  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:27:45.237945  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:27:45.294234  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:27:45.294289  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:27:45.310639  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:27:45.310672  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:27:45.389194  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:27:45.389229  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:27:45.389244  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:27:45.465807  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:27:45.465851  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:27:48.022241  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:27:48.036984  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:27:48.037090  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:27:48.074965  953268 cri.go:89] found id: ""
	I0224 13:27:48.074994  953268 logs.go:282] 0 containers: []
	W0224 13:27:48.075003  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:27:48.075008  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:27:48.075076  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:27:48.117517  953268 cri.go:89] found id: ""
	I0224 13:27:48.117548  953268 logs.go:282] 0 containers: []
	W0224 13:27:48.117558  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:27:48.117566  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:27:48.117625  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:27:48.162626  953268 cri.go:89] found id: ""
	I0224 13:27:48.162661  953268 logs.go:282] 0 containers: []
	W0224 13:27:48.162672  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:27:48.162680  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:27:48.162747  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:27:48.208868  953268 cri.go:89] found id: ""
	I0224 13:27:48.208898  953268 logs.go:282] 0 containers: []
	W0224 13:27:48.208910  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:27:48.208919  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:27:48.208990  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:27:48.256428  953268 cri.go:89] found id: ""
	I0224 13:27:48.256464  953268 logs.go:282] 0 containers: []
	W0224 13:27:48.256477  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:27:48.256491  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:27:48.256588  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:27:48.297395  953268 cri.go:89] found id: ""
	I0224 13:27:48.297430  953268 logs.go:282] 0 containers: []
	W0224 13:27:48.297442  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:27:48.297451  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:27:48.297519  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:27:48.343573  953268 cri.go:89] found id: ""
	I0224 13:27:48.343604  953268 logs.go:282] 0 containers: []
	W0224 13:27:48.343615  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:27:48.343623  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:27:48.343687  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:27:48.391951  953268 cri.go:89] found id: ""
	I0224 13:27:48.391995  953268 logs.go:282] 0 containers: []
	W0224 13:27:48.392011  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:27:48.392045  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:27:48.392061  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:27:48.412527  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:27:48.412564  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:27:48.513431  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:27:48.513455  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:27:48.513475  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:27:48.597745  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:27:48.597791  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:27:48.651113  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:27:48.651159  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:27:51.229674  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:27:51.243546  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:27:51.243627  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:27:51.287771  953268 cri.go:89] found id: ""
	I0224 13:27:51.287835  953268 logs.go:282] 0 containers: []
	W0224 13:27:51.287848  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:27:51.287857  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:27:51.287929  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:27:51.334574  953268 cri.go:89] found id: ""
	I0224 13:27:51.334608  953268 logs.go:282] 0 containers: []
	W0224 13:27:51.334621  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:27:51.334629  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:27:51.334706  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:27:51.374813  953268 cri.go:89] found id: ""
	I0224 13:27:51.374850  953268 logs.go:282] 0 containers: []
	W0224 13:27:51.374863  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:27:51.374871  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:27:51.374950  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:27:51.418693  953268 cri.go:89] found id: ""
	I0224 13:27:51.418727  953268 logs.go:282] 0 containers: []
	W0224 13:27:51.418740  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:27:51.418748  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:27:51.418810  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:27:51.458425  953268 cri.go:89] found id: ""
	I0224 13:27:51.458458  953268 logs.go:282] 0 containers: []
	W0224 13:27:51.458472  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:27:51.458480  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:27:51.458569  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:27:51.497964  953268 cri.go:89] found id: ""
	I0224 13:27:51.497998  953268 logs.go:282] 0 containers: []
	W0224 13:27:51.498010  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:27:51.498018  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:27:51.498091  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:27:51.540445  953268 cri.go:89] found id: ""
	I0224 13:27:51.540482  953268 logs.go:282] 0 containers: []
	W0224 13:27:51.540495  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:27:51.540504  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:27:51.540571  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:27:51.581922  953268 cri.go:89] found id: ""
	I0224 13:27:51.581962  953268 logs.go:282] 0 containers: []
	W0224 13:27:51.581975  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:27:51.581989  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:27:51.582004  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:27:51.640650  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:27:51.640709  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:27:51.659220  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:27:51.659264  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:27:51.761113  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:27:51.761144  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:27:51.761163  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:27:51.883037  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:27:51.883176  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:27:54.454714  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:27:54.470606  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:27:54.470704  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:27:54.512775  953268 cri.go:89] found id: ""
	I0224 13:27:54.512806  953268 logs.go:282] 0 containers: []
	W0224 13:27:54.512817  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:27:54.512825  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:27:54.512882  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:27:54.555128  953268 cri.go:89] found id: ""
	I0224 13:27:54.555155  953268 logs.go:282] 0 containers: []
	W0224 13:27:54.555166  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:27:54.555177  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:27:54.555240  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:27:54.596836  953268 cri.go:89] found id: ""
	I0224 13:27:54.596868  953268 logs.go:282] 0 containers: []
	W0224 13:27:54.596879  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:27:54.596887  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:27:54.596950  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:27:54.637414  953268 cri.go:89] found id: ""
	I0224 13:27:54.637452  953268 logs.go:282] 0 containers: []
	W0224 13:27:54.637464  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:27:54.637472  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:27:54.637539  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:27:54.687199  953268 cri.go:89] found id: ""
	I0224 13:27:54.687247  953268 logs.go:282] 0 containers: []
	W0224 13:27:54.687259  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:27:54.687267  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:27:54.687328  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:27:54.728586  953268 cri.go:89] found id: ""
	I0224 13:27:54.728626  953268 logs.go:282] 0 containers: []
	W0224 13:27:54.728638  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:27:54.728647  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:27:54.728720  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:27:54.786157  953268 cri.go:89] found id: ""
	I0224 13:27:54.786195  953268 logs.go:282] 0 containers: []
	W0224 13:27:54.786208  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:27:54.786237  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:27:54.786325  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:27:54.824477  953268 cri.go:89] found id: ""
	I0224 13:27:54.824518  953268 logs.go:282] 0 containers: []
	W0224 13:27:54.824531  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:27:54.824546  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:27:54.824565  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:27:54.875833  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:27:54.875883  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:27:54.890100  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:27:54.890136  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:27:54.986323  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:27:54.986409  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:27:54.986432  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:27:55.092281  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:27:55.092320  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:27:57.643410  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:27:57.658273  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:27:57.658354  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:27:57.698180  953268 cri.go:89] found id: ""
	I0224 13:27:57.698205  953268 logs.go:282] 0 containers: []
	W0224 13:27:57.698214  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:27:57.698220  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:27:57.698279  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:27:57.734998  953268 cri.go:89] found id: ""
	I0224 13:27:57.735034  953268 logs.go:282] 0 containers: []
	W0224 13:27:57.735042  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:27:57.735048  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:27:57.735104  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:27:57.773818  953268 cri.go:89] found id: ""
	I0224 13:27:57.773857  953268 logs.go:282] 0 containers: []
	W0224 13:27:57.773869  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:27:57.773877  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:27:57.773996  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:27:57.812646  953268 cri.go:89] found id: ""
	I0224 13:27:57.812678  953268 logs.go:282] 0 containers: []
	W0224 13:27:57.812691  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:27:57.812697  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:27:57.812753  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:27:57.848396  953268 cri.go:89] found id: ""
	I0224 13:27:57.848429  953268 logs.go:282] 0 containers: []
	W0224 13:27:57.848439  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:27:57.848444  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:27:57.848509  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:27:57.901425  953268 cri.go:89] found id: ""
	I0224 13:27:57.901453  953268 logs.go:282] 0 containers: []
	W0224 13:27:57.901470  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:27:57.901477  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:27:57.901538  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:27:57.939343  953268 cri.go:89] found id: ""
	I0224 13:27:57.939383  953268 logs.go:282] 0 containers: []
	W0224 13:27:57.939393  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:27:57.939400  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:27:57.939462  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:27:57.979880  953268 cri.go:89] found id: ""
	I0224 13:27:57.979912  953268 logs.go:282] 0 containers: []
	W0224 13:27:57.979921  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:27:57.979931  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:27:57.979943  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:27:58.054652  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:27:58.054705  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:27:58.096229  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:27:58.096266  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:27:58.158757  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:27:58.158805  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:27:58.174143  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:27:58.174180  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:27:58.254767  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:28:00.755128  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:28:00.769041  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:28:00.769110  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:28:00.808863  953268 cri.go:89] found id: ""
	I0224 13:28:00.808900  953268 logs.go:282] 0 containers: []
	W0224 13:28:00.808913  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:28:00.808921  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:28:00.808991  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:28:00.852603  953268 cri.go:89] found id: ""
	I0224 13:28:00.852633  953268 logs.go:282] 0 containers: []
	W0224 13:28:00.852644  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:28:00.852652  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:28:00.852719  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:28:00.901952  953268 cri.go:89] found id: ""
	I0224 13:28:00.901980  953268 logs.go:282] 0 containers: []
	W0224 13:28:00.901989  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:28:00.901994  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:28:00.902057  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:28:00.944366  953268 cri.go:89] found id: ""
	I0224 13:28:00.944393  953268 logs.go:282] 0 containers: []
	W0224 13:28:00.944405  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:28:00.944411  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:28:00.944465  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:28:00.980986  953268 cri.go:89] found id: ""
	I0224 13:28:00.981013  953268 logs.go:282] 0 containers: []
	W0224 13:28:00.981023  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:28:00.981029  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:28:00.981082  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:28:01.017886  953268 cri.go:89] found id: ""
	I0224 13:28:01.017918  953268 logs.go:282] 0 containers: []
	W0224 13:28:01.017927  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:28:01.017933  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:28:01.017990  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:28:01.054672  953268 cri.go:89] found id: ""
	I0224 13:28:01.054710  953268 logs.go:282] 0 containers: []
	W0224 13:28:01.054720  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:28:01.054728  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:28:01.054793  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:28:01.090949  953268 cri.go:89] found id: ""
	I0224 13:28:01.090980  953268 logs.go:282] 0 containers: []
	W0224 13:28:01.090989  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:28:01.091000  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:28:01.091020  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:28:01.163951  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:28:01.163972  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:28:01.163985  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:28:01.244051  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:28:01.244099  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:28:01.285789  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:28:01.285825  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:28:01.337609  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:28:01.337653  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:28:03.861474  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:28:03.875964  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:28:03.876053  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:28:03.912707  953268 cri.go:89] found id: ""
	I0224 13:28:03.912743  953268 logs.go:282] 0 containers: []
	W0224 13:28:03.912752  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:28:03.912758  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:28:03.912812  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:28:03.954647  953268 cri.go:89] found id: ""
	I0224 13:28:03.954680  953268 logs.go:282] 0 containers: []
	W0224 13:28:03.954692  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:28:03.954700  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:28:03.954752  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:28:03.995369  953268 cri.go:89] found id: ""
	I0224 13:28:03.995404  953268 logs.go:282] 0 containers: []
	W0224 13:28:03.995416  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:28:03.995423  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:28:03.995493  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:28:04.033552  953268 cri.go:89] found id: ""
	I0224 13:28:04.033590  953268 logs.go:282] 0 containers: []
	W0224 13:28:04.033602  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:28:04.033611  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:28:04.033675  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:28:04.071696  953268 cri.go:89] found id: ""
	I0224 13:28:04.071727  953268 logs.go:282] 0 containers: []
	W0224 13:28:04.071740  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:28:04.071746  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:28:04.071811  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:28:04.114770  953268 cri.go:89] found id: ""
	I0224 13:28:04.114812  953268 logs.go:282] 0 containers: []
	W0224 13:28:04.114824  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:28:04.114833  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:28:04.114900  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:28:04.157597  953268 cri.go:89] found id: ""
	I0224 13:28:04.157628  953268 logs.go:282] 0 containers: []
	W0224 13:28:04.157639  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:28:04.157647  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:28:04.157718  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:28:04.196632  953268 cri.go:89] found id: ""
	I0224 13:28:04.196667  953268 logs.go:282] 0 containers: []
	W0224 13:28:04.196679  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:28:04.196694  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:28:04.196709  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:28:04.274424  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:28:04.274464  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:28:04.274482  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:28:04.353282  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:28:04.353335  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:28:04.400406  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:28:04.400439  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:28:04.451657  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:28:04.451700  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:28:06.967901  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:28:06.981520  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:28:06.981611  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:28:07.028168  953268 cri.go:89] found id: ""
	I0224 13:28:07.028219  953268 logs.go:282] 0 containers: []
	W0224 13:28:07.028231  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:28:07.028240  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:28:07.028310  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:28:07.065580  953268 cri.go:89] found id: ""
	I0224 13:28:07.065613  953268 logs.go:282] 0 containers: []
	W0224 13:28:07.065625  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:28:07.065634  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:28:07.065700  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:28:07.104755  953268 cri.go:89] found id: ""
	I0224 13:28:07.104788  953268 logs.go:282] 0 containers: []
	W0224 13:28:07.104799  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:28:07.104806  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:28:07.104870  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:28:07.171664  953268 cri.go:89] found id: ""
	I0224 13:28:07.171698  953268 logs.go:282] 0 containers: []
	W0224 13:28:07.171708  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:28:07.171714  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:28:07.171776  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:28:07.221178  953268 cri.go:89] found id: ""
	I0224 13:28:07.221220  953268 logs.go:282] 0 containers: []
	W0224 13:28:07.221233  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:28:07.221241  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:28:07.221351  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:28:07.282961  953268 cri.go:89] found id: ""
	I0224 13:28:07.283000  953268 logs.go:282] 0 containers: []
	W0224 13:28:07.283014  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:28:07.283022  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:28:07.283095  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:28:07.329449  953268 cri.go:89] found id: ""
	I0224 13:28:07.329484  953268 logs.go:282] 0 containers: []
	W0224 13:28:07.329493  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:28:07.329499  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:28:07.329573  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:28:07.367914  953268 cri.go:89] found id: ""
	I0224 13:28:07.367949  953268 logs.go:282] 0 containers: []
	W0224 13:28:07.367961  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:28:07.367976  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:28:07.367994  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:28:07.429154  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:28:07.429206  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:28:07.444599  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:28:07.444640  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:28:07.519173  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:28:07.519221  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:28:07.519238  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:28:07.619120  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:28:07.619169  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:28:10.170573  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:28:10.186272  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:28:10.186333  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:28:10.227253  953268 cri.go:89] found id: ""
	I0224 13:28:10.227286  953268 logs.go:282] 0 containers: []
	W0224 13:28:10.227298  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:28:10.227306  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:28:10.227375  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:28:10.266233  953268 cri.go:89] found id: ""
	I0224 13:28:10.266271  953268 logs.go:282] 0 containers: []
	W0224 13:28:10.266284  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:28:10.266293  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:28:10.266369  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:28:10.306651  953268 cri.go:89] found id: ""
	I0224 13:28:10.306685  953268 logs.go:282] 0 containers: []
	W0224 13:28:10.306699  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:28:10.306707  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:28:10.306782  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:28:10.355133  953268 cri.go:89] found id: ""
	I0224 13:28:10.355172  953268 logs.go:282] 0 containers: []
	W0224 13:28:10.355184  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:28:10.355193  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:28:10.355269  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:28:10.397751  953268 cri.go:89] found id: ""
	I0224 13:28:10.397788  953268 logs.go:282] 0 containers: []
	W0224 13:28:10.397801  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:28:10.397807  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:28:10.397877  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:28:10.442079  953268 cri.go:89] found id: ""
	I0224 13:28:10.442113  953268 logs.go:282] 0 containers: []
	W0224 13:28:10.442125  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:28:10.442135  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:28:10.442224  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:28:10.480062  953268 cri.go:89] found id: ""
	I0224 13:28:10.480098  953268 logs.go:282] 0 containers: []
	W0224 13:28:10.480110  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:28:10.480118  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:28:10.480184  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:28:10.530609  953268 cri.go:89] found id: ""
	I0224 13:28:10.530650  953268 logs.go:282] 0 containers: []
	W0224 13:28:10.530662  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:28:10.530677  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:28:10.530691  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:28:10.611340  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:28:10.611387  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:28:10.671097  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:28:10.671158  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:28:10.724457  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:28:10.724500  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:28:10.741796  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:28:10.741834  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:28:10.821514  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:28:13.323210  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:28:13.336609  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:28:13.336678  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:28:13.372121  953268 cri.go:89] found id: ""
	I0224 13:28:13.372152  953268 logs.go:282] 0 containers: []
	W0224 13:28:13.372162  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:28:13.372169  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:28:13.372245  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:28:13.406875  953268 cri.go:89] found id: ""
	I0224 13:28:13.406914  953268 logs.go:282] 0 containers: []
	W0224 13:28:13.406928  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:28:13.406943  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:28:13.407012  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:28:13.445150  953268 cri.go:89] found id: ""
	I0224 13:28:13.445192  953268 logs.go:282] 0 containers: []
	W0224 13:28:13.445205  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:28:13.445213  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:28:13.445288  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:28:13.493973  953268 cri.go:89] found id: ""
	I0224 13:28:13.494009  953268 logs.go:282] 0 containers: []
	W0224 13:28:13.494021  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:28:13.494029  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:28:13.494104  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:28:13.536118  953268 cri.go:89] found id: ""
	I0224 13:28:13.536158  953268 logs.go:282] 0 containers: []
	W0224 13:28:13.536171  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:28:13.536179  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:28:13.536237  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:28:13.574179  953268 cri.go:89] found id: ""
	I0224 13:28:13.574227  953268 logs.go:282] 0 containers: []
	W0224 13:28:13.574237  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:28:13.574243  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:28:13.574314  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:28:13.612843  953268 cri.go:89] found id: ""
	I0224 13:28:13.612878  953268 logs.go:282] 0 containers: []
	W0224 13:28:13.612890  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:28:13.612899  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:28:13.612977  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:28:13.648018  953268 cri.go:89] found id: ""
	I0224 13:28:13.648051  953268 logs.go:282] 0 containers: []
	W0224 13:28:13.648060  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:28:13.648071  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:28:13.648086  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:28:13.718781  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:28:13.718818  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:28:13.718833  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:28:13.790942  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:28:13.790988  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:28:13.833491  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:28:13.833531  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:28:13.888309  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:28:13.888355  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:28:16.403004  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:28:16.417107  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:28:16.417192  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:28:16.459410  953268 cri.go:89] found id: ""
	I0224 13:28:16.459449  953268 logs.go:282] 0 containers: []
	W0224 13:28:16.459471  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:28:16.459480  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:28:16.459559  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:28:16.506397  953268 cri.go:89] found id: ""
	I0224 13:28:16.506436  953268 logs.go:282] 0 containers: []
	W0224 13:28:16.506448  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:28:16.506456  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:28:16.506529  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:28:16.543260  953268 cri.go:89] found id: ""
	I0224 13:28:16.543299  953268 logs.go:282] 0 containers: []
	W0224 13:28:16.543311  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:28:16.543320  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:28:16.543385  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:28:16.588498  953268 cri.go:89] found id: ""
	I0224 13:28:16.588537  953268 logs.go:282] 0 containers: []
	W0224 13:28:16.588549  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:28:16.588558  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:28:16.588631  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:28:16.638938  953268 cri.go:89] found id: ""
	I0224 13:28:16.638979  953268 logs.go:282] 0 containers: []
	W0224 13:28:16.638990  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:28:16.638999  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:28:16.639071  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:28:16.673923  953268 cri.go:89] found id: ""
	I0224 13:28:16.673958  953268 logs.go:282] 0 containers: []
	W0224 13:28:16.673968  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:28:16.673974  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:28:16.674053  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:28:16.717632  953268 cri.go:89] found id: ""
	I0224 13:28:16.717664  953268 logs.go:282] 0 containers: []
	W0224 13:28:16.717676  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:28:16.717685  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:28:16.717753  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:28:16.756591  953268 cri.go:89] found id: ""
	I0224 13:28:16.756625  953268 logs.go:282] 0 containers: []
	W0224 13:28:16.756636  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:28:16.756652  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:28:16.756668  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:28:16.800316  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:28:16.800362  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:28:16.851293  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:28:16.851339  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:28:16.866691  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:28:16.866728  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:28:16.953333  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:28:16.953355  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:28:16.953367  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:28:19.539039  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:28:19.557851  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:28:19.557948  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:28:19.598025  953268 cri.go:89] found id: ""
	I0224 13:28:19.598065  953268 logs.go:282] 0 containers: []
	W0224 13:28:19.598078  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:28:19.598087  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:28:19.598158  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:28:19.640492  953268 cri.go:89] found id: ""
	I0224 13:28:19.640526  953268 logs.go:282] 0 containers: []
	W0224 13:28:19.640539  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:28:19.640548  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:28:19.640615  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:28:19.680025  953268 cri.go:89] found id: ""
	I0224 13:28:19.680064  953268 logs.go:282] 0 containers: []
	W0224 13:28:19.680076  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:28:19.680085  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:28:19.680155  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:28:19.723028  953268 cri.go:89] found id: ""
	I0224 13:28:19.723059  953268 logs.go:282] 0 containers: []
	W0224 13:28:19.723072  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:28:19.723081  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:28:19.723142  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:28:19.766384  953268 cri.go:89] found id: ""
	I0224 13:28:19.766421  953268 logs.go:282] 0 containers: []
	W0224 13:28:19.766434  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:28:19.766444  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:28:19.766519  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:28:19.802725  953268 cri.go:89] found id: ""
	I0224 13:28:19.802756  953268 logs.go:282] 0 containers: []
	W0224 13:28:19.802765  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:28:19.802772  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:28:19.802822  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:28:19.838685  953268 cri.go:89] found id: ""
	I0224 13:28:19.838721  953268 logs.go:282] 0 containers: []
	W0224 13:28:19.838733  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:28:19.838741  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:28:19.838811  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:28:19.874790  953268 cri.go:89] found id: ""
	I0224 13:28:19.874831  953268 logs.go:282] 0 containers: []
	W0224 13:28:19.874844  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
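	Editor's note: the block above is minikube asking CRI-O for each expected control-plane component in turn and finding none. Condensed into a single loop over the same crictl invocation (the loop is an editorial convenience, not minikube's code):

	  # an empty result corresponds to the 'found id: ""' / '0 containers' lines above
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	              kube-controller-manager kindnet kubernetes-dashboard; do
	    ids=$(sudo crictl ps -a --quiet --name="$name")
	    [ -z "$ids" ] && echo "no container found matching \"$name\"" || echo "$name: $ids"
	  done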
	I0224 13:28:19.874858  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:28:19.874873  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:28:19.915268  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:28:19.915300  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:28:19.968694  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:28:19.968756  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:28:19.983823  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:28:19.983859  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:28:20.058288  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:28:20.058321  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:28:20.058342  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:28:22.637430  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:28:22.650924  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:28:22.651010  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:28:22.692754  953268 cri.go:89] found id: ""
	I0224 13:28:22.692787  953268 logs.go:282] 0 containers: []
	W0224 13:28:22.692794  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:28:22.692801  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:28:22.692862  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:28:22.734384  953268 cri.go:89] found id: ""
	I0224 13:28:22.734418  953268 logs.go:282] 0 containers: []
	W0224 13:28:22.734431  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:28:22.734439  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:28:22.734504  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:28:22.779717  953268 cri.go:89] found id: ""
	I0224 13:28:22.779760  953268 logs.go:282] 0 containers: []
	W0224 13:28:22.779772  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:28:22.779780  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:28:22.779844  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:28:22.815371  953268 cri.go:89] found id: ""
	I0224 13:28:22.815408  953268 logs.go:282] 0 containers: []
	W0224 13:28:22.815421  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:28:22.815431  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:28:22.815506  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:28:22.861234  953268 cri.go:89] found id: ""
	I0224 13:28:22.861272  953268 logs.go:282] 0 containers: []
	W0224 13:28:22.861283  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:28:22.861291  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:28:22.861383  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:28:22.902809  953268 cri.go:89] found id: ""
	I0224 13:28:22.902844  953268 logs.go:282] 0 containers: []
	W0224 13:28:22.902855  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:28:22.902861  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:28:22.902936  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:28:22.942959  953268 cri.go:89] found id: ""
	I0224 13:28:22.942992  953268 logs.go:282] 0 containers: []
	W0224 13:28:22.943002  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:28:22.943008  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:28:22.943077  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:28:22.980342  953268 cri.go:89] found id: ""
	I0224 13:28:22.980373  953268 logs.go:282] 0 containers: []
	W0224 13:28:22.980382  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:28:22.980394  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:28:22.980417  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:28:23.062314  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:28:23.062361  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:28:23.111038  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:28:23.111078  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:28:23.162781  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:28:23.162822  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:28:23.179739  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:28:23.179774  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:28:23.252800  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
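	Editor's note: each retry cycle gathers the same log sources before polling again for an apiserver process. The commands, collected from the Run: lines above into one place for reference:

	  # kubelet and CRI-O service logs (last 400 lines each)
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400

	  # kernel warnings and above
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

	  # container status, falling back to docker if crictl is not on PATH
	  sudo `which crictl || echo crictl` ps -a || sudo docker ps -a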
	I0224 13:28:25.753483  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:28:25.767870  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:28:25.767936  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:28:25.811201  953268 cri.go:89] found id: ""
	I0224 13:28:25.811236  953268 logs.go:282] 0 containers: []
	W0224 13:28:25.811247  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:28:25.811257  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:28:25.811321  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:28:25.853144  953268 cri.go:89] found id: ""
	I0224 13:28:25.853168  953268 logs.go:282] 0 containers: []
	W0224 13:28:25.853176  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:28:25.853183  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:28:25.853267  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:28:25.897811  953268 cri.go:89] found id: ""
	I0224 13:28:25.897840  953268 logs.go:282] 0 containers: []
	W0224 13:28:25.897852  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:28:25.897860  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:28:25.897920  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:28:25.949146  953268 cri.go:89] found id: ""
	I0224 13:28:25.949183  953268 logs.go:282] 0 containers: []
	W0224 13:28:25.949195  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:28:25.949204  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:28:25.949283  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:28:25.996838  953268 cri.go:89] found id: ""
	I0224 13:28:25.996872  953268 logs.go:282] 0 containers: []
	W0224 13:28:25.996885  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:28:25.996893  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:28:25.996956  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:28:26.042616  953268 cri.go:89] found id: ""
	I0224 13:28:26.042644  953268 logs.go:282] 0 containers: []
	W0224 13:28:26.042661  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:28:26.042669  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:28:26.042722  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:28:26.091193  953268 cri.go:89] found id: ""
	I0224 13:28:26.091228  953268 logs.go:282] 0 containers: []
	W0224 13:28:26.091243  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:28:26.091251  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:28:26.091325  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:28:26.140116  953268 cri.go:89] found id: ""
	I0224 13:28:26.140231  953268 logs.go:282] 0 containers: []
	W0224 13:28:26.140249  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:28:26.140264  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:28:26.140283  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:28:26.223375  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:28:26.223414  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:28:26.223431  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:28:26.299841  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:28:26.299887  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 13:28:26.355718  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:28:26.355752  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:28:26.416103  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:28:26.416146  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:28:28.935841  953268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:28:28.949263  953268 kubeadm.go:597] duration metric: took 4m4.617701203s to restartPrimaryControlPlane
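	Editor's note: the repeated pgrep runs above are minikube polling for a kube-apiserver process; the 4m4s duration metric marks that poll giving up, which triggers the reset on the next lines. A rough, hedged reconstruction of such a wait loop (the timeout and sleep interval are illustrative assumptions, not values taken from the log):

	  deadline=$((SECONDS + 240))   # illustrative timeout, not from the log
	  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	    if [ "$SECONDS" -ge "$deadline" ]; then
	      echo "timed out waiting for kube-apiserver" >&2
	      break
	    fi
	    sleep 3
	  done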
	W0224 13:28:28.949390  953268 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0224 13:28:28.949425  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0224 13:28:29.772108  953268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 13:28:29.789712  953268 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0224 13:28:29.802772  953268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 13:28:29.815407  953268 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 13:28:29.815430  953268 kubeadm.go:157] found existing configuration files:
	
	I0224 13:28:29.815482  953268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0224 13:28:29.827499  953268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0224 13:28:29.827567  953268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0224 13:28:29.839915  953268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0224 13:28:29.851908  953268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0224 13:28:29.852010  953268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0224 13:28:29.864615  953268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0224 13:28:29.876643  953268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0224 13:28:29.876709  953268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0224 13:28:29.889036  953268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0224 13:28:29.899192  953268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0224 13:28:29.899274  953268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
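	Editor's note: the grep/rm pairs above are minikube's stale-kubeconfig check before re-running kubeadm init: each file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint (here none of the files exist, so every grep exits 2 and every rm is a no-op). The same check, collapsed into one loop using the commands from the log:

	  endpoint='https://control-plane.minikube.internal:8443'
	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    if ! sudo grep "$endpoint" "/etc/kubernetes/$f" >/dev/null 2>&1; then
	      # endpoint missing or file absent: remove it so kubeadm regenerates it
	      sudo rm -f "/etc/kubernetes/$f"
	    fi
	  done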
	I0224 13:28:29.909825  953268 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0224 13:28:29.982106  953268 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0224 13:28:29.982164  953268 kubeadm.go:310] [preflight] Running pre-flight checks
	I0224 13:28:30.129471  953268 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0224 13:28:30.129629  953268 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0224 13:28:30.129775  953268 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0224 13:28:30.343997  953268 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0224 13:28:30.346113  953268 out.go:235]   - Generating certificates and keys ...
	I0224 13:28:30.346233  953268 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0224 13:28:30.346361  953268 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0224 13:28:30.346485  953268 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0224 13:28:30.346595  953268 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0224 13:28:30.346708  953268 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0224 13:28:30.346783  953268 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0224 13:28:30.346864  953268 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0224 13:28:30.346919  953268 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0224 13:28:30.346977  953268 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0224 13:28:30.347035  953268 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0224 13:28:30.347070  953268 kubeadm.go:310] [certs] Using the existing "sa" key
	I0224 13:28:30.347129  953268 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0224 13:28:30.457032  953268 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0224 13:28:30.782844  953268 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0224 13:28:30.842271  953268 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0224 13:28:31.034737  953268 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0224 13:28:31.050979  953268 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 13:28:31.053460  953268 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 13:28:31.053679  953268 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0224 13:28:31.205503  953268 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0224 13:28:31.207386  953268 out.go:235]   - Booting up control plane ...
	I0224 13:28:31.207542  953268 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0224 13:28:31.217370  953268 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0224 13:28:31.218449  953268 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0224 13:28:31.219212  953268 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0224 13:28:31.223840  953268 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0224 13:29:11.224324  953268 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0224 13:29:11.225286  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:29:11.225572  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:29:16.226144  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:29:16.226358  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:29:26.227187  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:29:26.227476  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:29:46.228012  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:29:46.228297  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:30:26.229952  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:30:26.230229  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:30:26.230260  953268 kubeadm.go:310] 
	I0224 13:30:26.230300  953268 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0224 13:30:26.230364  953268 kubeadm.go:310] 		timed out waiting for the condition
	I0224 13:30:26.230392  953268 kubeadm.go:310] 
	I0224 13:30:26.230441  953268 kubeadm.go:310] 	This error is likely caused by:
	I0224 13:30:26.230505  953268 kubeadm.go:310] 		- The kubelet is not running
	I0224 13:30:26.230648  953268 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0224 13:30:26.230661  953268 kubeadm.go:310] 
	I0224 13:30:26.230806  953268 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0224 13:30:26.230857  953268 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0224 13:30:26.230902  953268 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0224 13:30:26.230911  953268 kubeadm.go:310] 
	I0224 13:30:26.231038  953268 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0224 13:30:26.231147  953268 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0224 13:30:26.231163  953268 kubeadm.go:310] 
	I0224 13:30:26.231301  953268 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0224 13:30:26.231435  953268 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0224 13:30:26.231545  953268 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0224 13:30:26.231657  953268 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0224 13:30:26.231675  953268 kubeadm.go:310] 
	I0224 13:30:26.232473  953268 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 13:30:26.232591  953268 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0224 13:30:26.232710  953268 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
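	Editor's note: kubeadm's guidance above reduces to two checks: whether the kubelet service is healthy, and whether any control-plane container started and then crashed. Following those suggestions on this CRI-O node (the healthz curl mirrors the check the kubelet-check loop was failing; the rest are the commands kubeadm itself prints):

	  # kubelet service state and recent logs
	  sudo systemctl status kubelet
	  sudo journalctl -xeu kubelet

	  # the endpoint the kubelet-check loop above was polling
	  curl -sSL http://localhost:10248/healthz

	  # control-plane containers that started (and may have crashed) under CRI-O
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	  # then inspect a failing one:
	  # sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID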
	W0224 13:30:26.232936  953268 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
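	Editor's note: after this first init attempt fails, minikube wipes the node and retries the same init, as the Run:/Start: lines below show. The two steps, joined here for readability with the flags exactly as they appear in the log:

	  # wipe the failed control plane
	  sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	    kubeadm reset --cri-socket /var/run/crio/crio.sock --force

	  # retry init with the same config and preflight exclusions
	  sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	    kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	    --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem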
	
	I0224 13:30:26.232991  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0224 13:30:26.704666  953268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 13:30:26.720451  953268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 13:30:26.732280  953268 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 13:30:26.732306  953268 kubeadm.go:157] found existing configuration files:
	
	I0224 13:30:26.732371  953268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0224 13:30:26.743971  953268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0224 13:30:26.744050  953268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0224 13:30:26.755216  953268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0224 13:30:26.766460  953268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0224 13:30:26.766542  953268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0224 13:30:26.778117  953268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0224 13:30:26.789142  953268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0224 13:30:26.789208  953268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0224 13:30:26.800621  953268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0224 13:30:26.811672  953268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0224 13:30:26.811755  953268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0224 13:30:26.823061  953268 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0224 13:30:27.039614  953268 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 13:32:23.115672  953268 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0224 13:32:23.115858  953268 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0224 13:32:23.117520  953268 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0224 13:32:23.117626  953268 kubeadm.go:310] [preflight] Running pre-flight checks
	I0224 13:32:23.117831  953268 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0224 13:32:23.118008  953268 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0224 13:32:23.118171  953268 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0224 13:32:23.118281  953268 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0224 13:32:23.120434  953268 out.go:235]   - Generating certificates and keys ...
	I0224 13:32:23.120529  953268 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0224 13:32:23.120621  953268 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0224 13:32:23.120736  953268 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0224 13:32:23.120819  953268 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0224 13:32:23.120905  953268 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0224 13:32:23.120957  953268 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0224 13:32:23.121011  953268 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0224 13:32:23.121066  953268 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0224 13:32:23.121134  953268 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0224 13:32:23.121202  953268 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0224 13:32:23.121237  953268 kubeadm.go:310] [certs] Using the existing "sa" key
	I0224 13:32:23.121355  953268 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0224 13:32:23.121422  953268 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0224 13:32:23.121526  953268 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0224 13:32:23.121602  953268 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0224 13:32:23.121654  953268 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0224 13:32:23.121775  953268 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 13:32:23.121914  953268 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 13:32:23.121964  953268 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0224 13:32:23.122028  953268 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0224 13:32:23.123732  953268 out.go:235]   - Booting up control plane ...
	I0224 13:32:23.123835  953268 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0224 13:32:23.123904  953268 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0224 13:32:23.123986  953268 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0224 13:32:23.124096  953268 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0224 13:32:23.124279  953268 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0224 13:32:23.124332  953268 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0224 13:32:23.124401  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:32:23.124595  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:32:23.124691  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:32:23.124893  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:32:23.124960  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:32:23.125150  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:32:23.125220  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:32:23.125409  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:32:23.125508  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:32:23.125791  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:32:23.125817  953268 kubeadm.go:310] 
	I0224 13:32:23.125871  953268 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0224 13:32:23.125925  953268 kubeadm.go:310] 		timed out waiting for the condition
	I0224 13:32:23.125935  953268 kubeadm.go:310] 
	I0224 13:32:23.125985  953268 kubeadm.go:310] 	This error is likely caused by:
	I0224 13:32:23.126040  953268 kubeadm.go:310] 		- The kubelet is not running
	I0224 13:32:23.126194  953268 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0224 13:32:23.126222  953268 kubeadm.go:310] 
	I0224 13:32:23.126328  953268 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0224 13:32:23.126364  953268 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0224 13:32:23.126411  953268 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0224 13:32:23.126421  953268 kubeadm.go:310] 
	I0224 13:32:23.126543  953268 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0224 13:32:23.126655  953268 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0224 13:32:23.126665  953268 kubeadm.go:310] 
	I0224 13:32:23.126777  953268 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0224 13:32:23.126856  953268 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0224 13:32:23.126925  953268 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0224 13:32:23.127003  953268 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0224 13:32:23.127087  953268 kubeadm.go:310] 
	I0224 13:32:23.127095  953268 kubeadm.go:394] duration metric: took 7m58.850238597s to StartCluster
	I0224 13:32:23.127168  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:32:23.127245  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:32:23.173206  953268 cri.go:89] found id: ""
	I0224 13:32:23.173252  953268 logs.go:282] 0 containers: []
	W0224 13:32:23.173265  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:32:23.173274  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:32:23.173355  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:32:23.220974  953268 cri.go:89] found id: ""
	I0224 13:32:23.221008  953268 logs.go:282] 0 containers: []
	W0224 13:32:23.221017  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:32:23.221024  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:32:23.221095  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:32:23.256282  953268 cri.go:89] found id: ""
	I0224 13:32:23.256316  953268 logs.go:282] 0 containers: []
	W0224 13:32:23.256327  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:32:23.256335  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:32:23.256423  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:32:23.292296  953268 cri.go:89] found id: ""
	I0224 13:32:23.292329  953268 logs.go:282] 0 containers: []
	W0224 13:32:23.292340  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:32:23.292355  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:32:23.292422  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:32:23.328368  953268 cri.go:89] found id: ""
	I0224 13:32:23.328399  953268 logs.go:282] 0 containers: []
	W0224 13:32:23.328408  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:32:23.328414  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:32:23.328488  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:32:23.380963  953268 cri.go:89] found id: ""
	I0224 13:32:23.380995  953268 logs.go:282] 0 containers: []
	W0224 13:32:23.381005  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:32:23.381014  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:32:23.381083  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:32:23.448170  953268 cri.go:89] found id: ""
	I0224 13:32:23.448206  953268 logs.go:282] 0 containers: []
	W0224 13:32:23.448219  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:32:23.448227  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:32:23.448301  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:32:23.494938  953268 cri.go:89] found id: ""
	I0224 13:32:23.494969  953268 logs.go:282] 0 containers: []
	W0224 13:32:23.494978  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:32:23.494989  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:32:23.495004  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:32:23.545770  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:32:23.545817  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:32:23.561559  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:32:23.561608  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:32:23.639942  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:32:23.639969  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:32:23.639983  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:32:23.748671  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:32:23.748715  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0224 13:32:23.790465  953268 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0224 13:32:23.790543  953268 out.go:270] * 
	* 
	W0224 13:32:23.790632  953268 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0224 13:32:23.790650  953268 out.go:270] * 
	* 
	W0224 13:32:23.791585  953268 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0224 13:32:23.796216  953268 out.go:201] 
	W0224 13:32:23.797430  953268 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0224 13:32:23.797505  953268 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0224 13:32:23.797547  953268 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0224 13:32:23.799102  953268 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-233759 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
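The error output above already names the follow-up steps: check the kubelet, list crashed control-plane containers with crictl, and retry the start with the systemd cgroup driver. A minimal sketch of those steps, assembled only from the hints in this run's log and assuming the same profile name (old-k8s-version-233759), driver/runtime flags, and crio socket path shown above:

	# kubelet health on the VM, as suggested by the kubeadm output
	minikube -p old-k8s-version-233759 ssh "sudo systemctl status kubelet"
	minikube -p old-k8s-version-233759 ssh "sudo journalctl -xeu kubelet"

	# crashed control-plane containers, using the crictl invocation quoted in the kubeadm output
	minikube -p old-k8s-version-233759 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# retry the start with the cgroup driver hinted at by the K8S_KUBELET_NOT_RUNNING suggestion
	minikube start -p old-k8s-version-233759 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

Whether the cgroup-driver override actually resolves the timeout depends on the kubelet/crio configuration inside the VM; the commands are a diagnostic sketch, not a confirmed fix.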
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-233759 -n old-k8s-version-233759
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-233759 -n old-k8s-version-233759: exit status 2 (238.708719ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-233759 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-037381 image list                          | embed-certs-037381           | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-037381                                  | embed-certs-037381           | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-037381                                  | embed-certs-037381           | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-037381                                  | embed-certs-037381           | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	| delete  | -p embed-certs-037381                                  | embed-certs-037381           | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	| start   | -p newest-cni-651381 --memory=2200 --alsologtostderr   | newest-cni-651381            | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:28 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| image   | no-preload-956442 image list                           | no-preload-956442            | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-956442                                   | no-preload-956442            | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-956442                                   | no-preload-956442            | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-956442                                   | no-preload-956442            | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	| delete  | -p no-preload-956442                                   | no-preload-956442            | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	| image   | default-k8s-diff-port-108648                           | default-k8s-diff-port-108648 | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-108648 | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | default-k8s-diff-port-108648                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-108648 | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | default-k8s-diff-port-108648                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-108648 | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | default-k8s-diff-port-108648                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-108648 | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | default-k8s-diff-port-108648                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-651381             | newest-cni-651381            | jenkins | v1.35.0 | 24 Feb 25 13:28 UTC | 24 Feb 25 13:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-651381                                   | newest-cni-651381            | jenkins | v1.35.0 | 24 Feb 25 13:28 UTC | 24 Feb 25 13:28 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-651381                  | newest-cni-651381            | jenkins | v1.35.0 | 24 Feb 25 13:28 UTC | 24 Feb 25 13:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-651381 --memory=2200 --alsologtostderr   | newest-cni-651381            | jenkins | v1.35.0 | 24 Feb 25 13:28 UTC | 24 Feb 25 13:29 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| image   | newest-cni-651381 image list                           | newest-cni-651381            | jenkins | v1.35.0 | 24 Feb 25 13:29 UTC | 24 Feb 25 13:29 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-651381                                   | newest-cni-651381            | jenkins | v1.35.0 | 24 Feb 25 13:29 UTC | 24 Feb 25 13:29 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-651381                                   | newest-cni-651381            | jenkins | v1.35.0 | 24 Feb 25 13:29 UTC | 24 Feb 25 13:29 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-651381                                   | newest-cni-651381            | jenkins | v1.35.0 | 24 Feb 25 13:29 UTC | 24 Feb 25 13:29 UTC |
	| delete  | -p newest-cni-651381                                   | newest-cni-651381            | jenkins | v1.35.0 | 24 Feb 25 13:29 UTC | 24 Feb 25 13:29 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/24 13:28:38
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 13:28:38.792971  956077 out.go:345] Setting OutFile to fd 1 ...
	I0224 13:28:38.793077  956077 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 13:28:38.793085  956077 out.go:358] Setting ErrFile to fd 2...
	I0224 13:28:38.793089  956077 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 13:28:38.793277  956077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-887294/.minikube/bin
	I0224 13:28:38.793883  956077 out.go:352] Setting JSON to false
	I0224 13:28:38.794844  956077 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":11460,"bootTime":1740392259,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 13:28:38.794956  956077 start.go:139] virtualization: kvm guest
	I0224 13:28:38.797461  956077 out.go:177] * [newest-cni-651381] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0224 13:28:38.798901  956077 out.go:177]   - MINIKUBE_LOCATION=20451
	I0224 13:28:38.798939  956077 notify.go:220] Checking for updates...
	I0224 13:28:38.801509  956077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 13:28:38.802725  956077 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20451-887294/kubeconfig
	I0224 13:28:38.804035  956077 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-887294/.minikube
	I0224 13:28:38.805462  956077 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0224 13:28:38.806731  956077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 13:28:38.808519  956077 config.go:182] Loaded profile config "newest-cni-651381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 13:28:38.808929  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:28:38.808983  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:28:38.824230  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33847
	I0224 13:28:38.824657  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:28:38.825223  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:28:38.825247  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:28:38.825706  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:28:38.825963  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:28:38.826250  956077 driver.go:394] Setting default libvirt URI to qemu:///system
	I0224 13:28:38.826574  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:28:38.826623  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:28:38.841716  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37847
	I0224 13:28:38.842131  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:28:38.842597  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:28:38.842619  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:28:38.842935  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:28:38.843142  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:28:38.879934  956077 out.go:177] * Using the kvm2 driver based on existing profile
	I0224 13:28:38.881238  956077 start.go:297] selected driver: kvm2
	I0224 13:28:38.881261  956077 start.go:901] validating driver "kvm2" against &{Name:newest-cni-651381 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:newest-cni-651381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPort
s:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 13:28:38.881430  956077 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 13:28:38.882088  956077 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 13:28:38.882170  956077 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20451-887294/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0224 13:28:38.897736  956077 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0224 13:28:38.898150  956077 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0224 13:28:38.898189  956077 cni.go:84] Creating CNI manager for ""
	I0224 13:28:38.898247  956077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0224 13:28:38.898285  956077 start.go:340] cluster config:
	{Name:newest-cni-651381 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-651381 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 13:28:38.898383  956077 iso.go:125] acquiring lock: {Name:mk57408cca66a96a13d93cda43cdfac6e61aef3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 13:28:38.900247  956077 out.go:177] * Starting "newest-cni-651381" primary control-plane node in "newest-cni-651381" cluster
	I0224 13:28:38.901467  956077 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0224 13:28:38.901516  956077 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0224 13:28:38.901527  956077 cache.go:56] Caching tarball of preloaded images
	I0224 13:28:38.901613  956077 preload.go:172] Found /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0224 13:28:38.901623  956077 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0224 13:28:38.901723  956077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/newest-cni-651381/config.json ...
	I0224 13:28:38.901897  956077 start.go:360] acquireMachinesLock for newest-cni-651381: {Name:mk023761b01bb629a1acd40bc8104cc517b0e15b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0224 13:28:38.901940  956077 start.go:364] duration metric: took 22.052µs to acquireMachinesLock for "newest-cni-651381"
	I0224 13:28:38.901954  956077 start.go:96] Skipping create...Using existing machine configuration
	I0224 13:28:38.901962  956077 fix.go:54] fixHost starting: 
	I0224 13:28:38.902241  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:28:38.902287  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:28:38.917188  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45863
	I0224 13:28:38.917773  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:28:38.918380  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:28:38.918452  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:28:38.918772  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:28:38.918951  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:28:38.919074  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetState
	I0224 13:28:38.920729  956077 fix.go:112] recreateIfNeeded on newest-cni-651381: state=Stopped err=<nil>
	I0224 13:28:38.920774  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	W0224 13:28:38.920911  956077 fix.go:138] unexpected machine state, will restart: <nil>
	I0224 13:28:38.922862  956077 out.go:177] * Restarting existing kvm2 VM for "newest-cni-651381" ...
	I0224 13:28:38.924182  956077 main.go:141] libmachine: (newest-cni-651381) Calling .Start
	I0224 13:28:38.924366  956077 main.go:141] libmachine: (newest-cni-651381) starting domain...
	I0224 13:28:38.924388  956077 main.go:141] libmachine: (newest-cni-651381) ensuring networks are active...
	I0224 13:28:38.925130  956077 main.go:141] libmachine: (newest-cni-651381) Ensuring network default is active
	I0224 13:28:38.925476  956077 main.go:141] libmachine: (newest-cni-651381) Ensuring network mk-newest-cni-651381 is active
	I0224 13:28:38.925802  956077 main.go:141] libmachine: (newest-cni-651381) getting domain XML...
	I0224 13:28:38.926703  956077 main.go:141] libmachine: (newest-cni-651381) creating domain...
	I0224 13:28:40.156271  956077 main.go:141] libmachine: (newest-cni-651381) waiting for IP...
	I0224 13:28:40.157205  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:40.157681  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:40.157772  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:40.157685  956112 retry.go:31] will retry after 260.668185ms: waiting for domain to come up
	I0224 13:28:40.420311  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:40.420800  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:40.420848  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:40.420767  956112 retry.go:31] will retry after 303.764677ms: waiting for domain to come up
	I0224 13:28:40.726666  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:40.727228  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:40.727281  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:40.727200  956112 retry.go:31] will retry after 355.373964ms: waiting for domain to come up
	I0224 13:28:41.083712  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:41.084293  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:41.084350  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:41.084276  956112 retry.go:31] will retry after 470.293336ms: waiting for domain to come up
	I0224 13:28:41.556004  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:41.556503  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:41.556533  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:41.556435  956112 retry.go:31] will retry after 528.413702ms: waiting for domain to come up
	I0224 13:28:42.086215  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:42.086654  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:42.086688  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:42.086615  956112 retry.go:31] will retry after 758.532968ms: waiting for domain to come up
	I0224 13:28:42.846682  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:42.847289  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:42.847316  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:42.847249  956112 retry.go:31] will retry after 771.163995ms: waiting for domain to come up
	I0224 13:28:43.620325  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:43.620953  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:43.620987  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:43.620927  956112 retry.go:31] will retry after 1.349772038s: waiting for domain to come up
	I0224 13:28:44.971949  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:44.972514  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:44.972544  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:44.972446  956112 retry.go:31] will retry after 1.187923617s: waiting for domain to come up
	I0224 13:28:46.161965  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:46.162503  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:46.162523  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:46.162464  956112 retry.go:31] will retry after 2.129619904s: waiting for domain to come up
	I0224 13:28:48.294708  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:48.295258  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:48.295292  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:48.295208  956112 retry.go:31] will retry after 2.033415833s: waiting for domain to come up
	I0224 13:28:50.330158  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:50.330661  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:50.330693  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:50.330607  956112 retry.go:31] will retry after 3.415912416s: waiting for domain to come up
	I0224 13:28:53.750421  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:53.750924  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:53.750982  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:53.750908  956112 retry.go:31] will retry after 3.200463394s: waiting for domain to come up
	I0224 13:28:56.955224  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:56.955868  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has current primary IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:56.955897  956077 main.go:141] libmachine: (newest-cni-651381) found domain IP: 192.168.39.43
	I0224 13:28:56.955914  956077 main.go:141] libmachine: (newest-cni-651381) reserving static IP address...
	I0224 13:28:56.956419  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "newest-cni-651381", mac: "52:54:00:1b:98:b8", ip: "192.168.39.43"} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:56.956465  956077 main.go:141] libmachine: (newest-cni-651381) DBG | skip adding static IP to network mk-newest-cni-651381 - found existing host DHCP lease matching {name: "newest-cni-651381", mac: "52:54:00:1b:98:b8", ip: "192.168.39.43"}
	I0224 13:28:56.956483  956077 main.go:141] libmachine: (newest-cni-651381) reserved static IP address 192.168.39.43 for domain newest-cni-651381
	I0224 13:28:56.956496  956077 main.go:141] libmachine: (newest-cni-651381) waiting for SSH...
	I0224 13:28:56.956507  956077 main.go:141] libmachine: (newest-cni-651381) DBG | Getting to WaitForSSH function...
	I0224 13:28:56.959046  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:56.959392  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:56.959427  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:56.959538  956077 main.go:141] libmachine: (newest-cni-651381) DBG | Using SSH client type: external
	I0224 13:28:56.959564  956077 main.go:141] libmachine: (newest-cni-651381) DBG | Using SSH private key: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/newest-cni-651381/id_rsa (-rw-------)
	I0224 13:28:56.959630  956077 main.go:141] libmachine: (newest-cni-651381) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.43 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20451-887294/.minikube/machines/newest-cni-651381/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0224 13:28:56.959653  956077 main.go:141] libmachine: (newest-cni-651381) DBG | About to run SSH command:
	I0224 13:28:56.959689  956077 main.go:141] libmachine: (newest-cni-651381) DBG | exit 0
	I0224 13:28:57.089584  956077 main.go:141] libmachine: (newest-cni-651381) DBG | SSH cmd err, output: <nil>: 
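	(For context: the external SSH probe above can be reassembled from the option list in the DBG lines. This is a sketch reconstructed from the logged arguments, not taken from minikube's source; the trailing "exit 0" is the liveness command the provisioner runs.)

	    # Reconstructed external ssh invocation used to wait for SSH (options copied from the DBG line above).
	    /usr/bin/ssh -F /dev/null \
	      -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	      -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	      -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	      -o IdentitiesOnly=yes -p 22 \
	      -i /home/jenkins/minikube-integration/20451-887294/.minikube/machines/newest-cni-651381/id_rsa \
	      docker@192.168.39.43 "exit 0"   # exits 0 once the guest's sshd is reachable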
	I0224 13:28:57.089980  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetConfigRaw
	I0224 13:28:57.090668  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetIP
	I0224 13:28:57.093149  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.093555  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:57.093576  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.093814  956077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/newest-cni-651381/config.json ...
	I0224 13:28:57.094015  956077 machine.go:93] provisionDockerMachine start ...
	I0224 13:28:57.094035  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:28:57.094293  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:28:57.096640  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.097039  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:57.097068  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.097149  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:28:57.097351  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:57.097496  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:57.097643  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:28:57.097810  956077 main.go:141] libmachine: Using SSH client type: native
	I0224 13:28:57.098046  956077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0224 13:28:57.098063  956077 main.go:141] libmachine: About to run SSH command:
	hostname
	I0224 13:28:57.218057  956077 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0224 13:28:57.218090  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetMachineName
	I0224 13:28:57.218365  956077 buildroot.go:166] provisioning hostname "newest-cni-651381"
	I0224 13:28:57.218404  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetMachineName
	I0224 13:28:57.218597  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:28:57.221391  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.221750  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:57.221778  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.221974  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:28:57.222142  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:57.222294  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:57.222392  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:28:57.222531  956077 main.go:141] libmachine: Using SSH client type: native
	I0224 13:28:57.222718  956077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0224 13:28:57.222731  956077 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-651381 && echo "newest-cni-651381" | sudo tee /etc/hostname
	I0224 13:28:57.354081  956077 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-651381
	
	I0224 13:28:57.354129  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:28:57.357103  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.357516  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:57.357552  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.357765  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:28:57.357998  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:57.358156  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:57.358339  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:28:57.358627  956077 main.go:141] libmachine: Using SSH client type: native
	I0224 13:28:57.358827  956077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0224 13:28:57.358843  956077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-651381' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-651381/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-651381' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 13:28:57.483573  956077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
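	(The /etc/hosts reconciliation that just ran is worth reading on its own; below is the same snippet from the command above with comments added, nothing new beyond the annotations.)

	    # Only touch /etc/hosts if no entry already ends with the new hostname.
	    if ! grep -xq '.*\snewest-cni-651381' /etc/hosts; then
	      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	        # Rewrite an existing 127.0.1.1 line in place...
	        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-651381/g' /etc/hosts
	      else
	        # ...or append a fresh one if none exists.
	        echo '127.0.1.1 newest-cni-651381' | sudo tee -a /etc/hosts
	      fi
	    fi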
	I0224 13:28:57.483608  956077 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20451-887294/.minikube CaCertPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20451-887294/.minikube}
	I0224 13:28:57.483657  956077 buildroot.go:174] setting up certificates
	I0224 13:28:57.483671  956077 provision.go:84] configureAuth start
	I0224 13:28:57.483688  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetMachineName
	I0224 13:28:57.484035  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetIP
	I0224 13:28:57.486755  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.487062  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:57.487093  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.487216  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:28:57.489282  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.489619  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:57.489647  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.489808  956077 provision.go:143] copyHostCerts
	I0224 13:28:57.489880  956077 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-887294/.minikube/ca.pem, removing ...
	I0224 13:28:57.489894  956077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-887294/.minikube/ca.pem
	I0224 13:28:57.489977  956077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20451-887294/.minikube/ca.pem (1082 bytes)
	I0224 13:28:57.490110  956077 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-887294/.minikube/cert.pem, removing ...
	I0224 13:28:57.490121  956077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-887294/.minikube/cert.pem
	I0224 13:28:57.490161  956077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20451-887294/.minikube/cert.pem (1123 bytes)
	I0224 13:28:57.490254  956077 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-887294/.minikube/key.pem, removing ...
	I0224 13:28:57.490264  956077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-887294/.minikube/key.pem
	I0224 13:28:57.490300  956077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20451-887294/.minikube/key.pem (1679 bytes)
	I0224 13:28:57.490392  956077 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca-key.pem org=jenkins.newest-cni-651381 san=[127.0.0.1 192.168.39.43 localhost minikube newest-cni-651381]
	I0224 13:28:57.603657  956077 provision.go:177] copyRemoteCerts
	I0224 13:28:57.603728  956077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 13:28:57.603756  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:28:57.606668  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.607001  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:57.607035  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.607186  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:28:57.607409  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:57.607596  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:28:57.607747  956077 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/newest-cni-651381/id_rsa Username:docker}
	I0224 13:28:57.696271  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0224 13:28:57.720966  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0224 13:28:57.745080  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0224 13:28:57.770570  956077 provision.go:87] duration metric: took 286.877496ms to configureAuth
	I0224 13:28:57.770610  956077 buildroot.go:189] setting minikube options for container-runtime
	I0224 13:28:57.770819  956077 config.go:182] Loaded profile config "newest-cni-651381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 13:28:57.770914  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:28:57.773830  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.774134  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:57.774182  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.774374  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:28:57.774576  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:57.774725  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:57.774844  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:28:57.774994  956077 main.go:141] libmachine: Using SSH client type: native
	I0224 13:28:57.775210  956077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0224 13:28:57.775229  956077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0224 13:28:58.015198  956077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0224 13:28:58.015231  956077 machine.go:96] duration metric: took 921.200919ms to provisionDockerMachine
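	(The container-runtime option step above amounts to dropping a one-line environment file and restarting CRI-O. A condensed sketch of the logged command, usable by hand on the node:)

	    # Write minikube's CRI-O environment drop-in and bounce the service.
	    sudo mkdir -p /etc/sysconfig
	    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
	    sudo systemctl restart crio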
	I0224 13:28:58.015248  956077 start.go:293] postStartSetup for "newest-cni-651381" (driver="kvm2")
	I0224 13:28:58.015261  956077 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 13:28:58.015323  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:28:58.015781  956077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 13:28:58.015825  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:28:58.018588  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:58.018934  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:58.018957  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:58.019113  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:28:58.019321  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:58.019495  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:28:58.019655  956077 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/newest-cni-651381/id_rsa Username:docker}
	I0224 13:28:58.108667  956077 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 13:28:58.113192  956077 info.go:137] Remote host: Buildroot 2023.02.9
	I0224 13:28:58.113221  956077 filesync.go:126] Scanning /home/jenkins/minikube-integration/20451-887294/.minikube/addons for local assets ...
	I0224 13:28:58.113289  956077 filesync.go:126] Scanning /home/jenkins/minikube-integration/20451-887294/.minikube/files for local assets ...
	I0224 13:28:58.113387  956077 filesync.go:149] local asset: /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/ssl/certs/8945642.pem -> 8945642.pem in /etc/ssl/certs
	I0224 13:28:58.113476  956077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0224 13:28:58.123292  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/ssl/certs/8945642.pem --> /etc/ssl/certs/8945642.pem (1708 bytes)
	I0224 13:28:58.150288  956077 start.go:296] duration metric: took 135.022634ms for postStartSetup
	I0224 13:28:58.150340  956077 fix.go:56] duration metric: took 19.248378049s for fixHost
	I0224 13:28:58.150364  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:28:58.152951  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:58.153283  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:58.153338  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:58.153514  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:28:58.153706  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:58.153862  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:58.154044  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:28:58.154233  956077 main.go:141] libmachine: Using SSH client type: native
	I0224 13:28:58.154467  956077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0224 13:28:58.154479  956077 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0224 13:28:58.270588  956077 main.go:141] libmachine: SSH cmd err, output: <nil>: 1740403738.235399997
	
	I0224 13:28:58.270619  956077 fix.go:216] guest clock: 1740403738.235399997
	I0224 13:28:58.270629  956077 fix.go:229] Guest: 2025-02-24 13:28:58.235399997 +0000 UTC Remote: 2025-02-24 13:28:58.150345054 +0000 UTC m=+19.397261834 (delta=85.054943ms)
	I0224 13:28:58.270676  956077 fix.go:200] guest clock delta is within tolerance: 85.054943ms
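	(The delta reported here is just the guest clock minus the host-observed remote time: 1740403738.235399997 s - 1740403738.150345054 s = 0.085054943 s, i.e. the 85.054943ms shown, comfortably inside the skew tolerance, so no clock adjustment is made.)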
	I0224 13:28:58.270685  956077 start.go:83] releasing machines lock for "newest-cni-651381", held for 19.368735573s
	I0224 13:28:58.270712  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:28:58.271039  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetIP
	I0224 13:28:58.273607  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:58.274111  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:58.274137  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:58.274333  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:28:58.274936  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:28:58.275139  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:28:58.275266  956077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0224 13:28:58.275326  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:28:58.275372  956077 ssh_runner.go:195] Run: cat /version.json
	I0224 13:28:58.275401  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:28:58.278276  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:58.278682  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:58.278713  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:58.278732  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:58.278841  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:28:58.279035  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:58.279101  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:58.279129  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:58.279314  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:28:58.279344  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:28:58.279459  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:58.279555  956077 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/newest-cni-651381/id_rsa Username:docker}
	I0224 13:28:58.279604  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:28:58.279716  956077 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/newest-cni-651381/id_rsa Username:docker}
	I0224 13:28:58.363123  956077 ssh_runner.go:195] Run: systemctl --version
	I0224 13:28:58.385513  956077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0224 13:28:58.537461  956077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0224 13:28:58.543840  956077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0224 13:28:58.543916  956077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0224 13:28:58.562167  956077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0224 13:28:58.562203  956077 start.go:495] detecting cgroup driver to use...
	I0224 13:28:58.562288  956077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0224 13:28:58.580754  956077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0224 13:28:58.595609  956077 docker.go:217] disabling cri-docker service (if available) ...
	I0224 13:28:58.595684  956077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0224 13:28:58.610441  956077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0224 13:28:58.625512  956077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0224 13:28:58.742160  956077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0224 13:28:58.897257  956077 docker.go:233] disabling docker service ...
	I0224 13:28:58.897354  956077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0224 13:28:58.913053  956077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0224 13:28:58.927511  956077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0224 13:28:59.078303  956077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0224 13:28:59.190231  956077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0224 13:28:59.205007  956077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 13:28:59.224899  956077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0224 13:28:59.224959  956077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:28:59.235985  956077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0224 13:28:59.236076  956077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:28:59.247262  956077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:28:59.258419  956077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:28:59.269559  956077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 13:28:59.281485  956077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:28:59.293207  956077 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:28:59.312591  956077 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:28:59.324339  956077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 13:28:59.334891  956077 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0224 13:28:59.334973  956077 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0224 13:28:59.349831  956077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0224 13:28:59.360347  956077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 13:28:59.479779  956077 ssh_runner.go:195] Run: sudo systemctl restart crio
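	(Condensed from the Run: lines between 13:28:59.205 and 13:28:59.479 above, the CRI-O reconfiguration for this profile boils down to the following annotated sketch; paths and values are exactly as logged, and the default_sysctls edit is summarized in a comment rather than repeating the sed incantation.)

	    # Point crictl at the CRI-O socket.
	    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	    # Pin the pause image and select the cgroupfs cgroup manager in the CRI-O drop-in.
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    # (default_sysctls also gains "net.ipv4.ip_unprivileged_port_start=0" so pods can bind low ports.)
	    # Make bridged traffic visible to iptables and enable IPv4 forwarding.
	    sudo modprobe br_netfilter
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	    sudo systemctl daemon-reload && sudo systemctl restart crio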
	I0224 13:28:59.577405  956077 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0224 13:28:59.577519  956077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0224 13:28:59.583030  956077 start.go:563] Will wait 60s for crictl version
	I0224 13:28:59.583098  956077 ssh_runner.go:195] Run: which crictl
	I0224 13:28:59.587159  956077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0224 13:28:59.625913  956077 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0224 13:28:59.626017  956077 ssh_runner.go:195] Run: crio --version
	I0224 13:28:59.656040  956077 ssh_runner.go:195] Run: crio --version
	I0224 13:28:59.690484  956077 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0224 13:28:59.691655  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetIP
	I0224 13:28:59.694827  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:59.695279  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:59.695313  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:59.695529  956077 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0224 13:28:59.700214  956077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 13:28:59.714858  956077 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0224 13:28:59.716146  956077 kubeadm.go:883] updating cluster {Name:newest-cni-651381 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-6
51381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddr
ess: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0224 13:28:59.716344  956077 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0224 13:28:59.716441  956077 ssh_runner.go:195] Run: sudo crictl images --output json
	I0224 13:28:59.759022  956077 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0224 13:28:59.759106  956077 ssh_runner.go:195] Run: which lz4
	I0224 13:28:59.763641  956077 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0224 13:28:59.768063  956077 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0224 13:28:59.768104  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0224 13:29:01.313361  956077 crio.go:462] duration metric: took 1.549763964s to copy over tarball
	I0224 13:29:01.313502  956077 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0224 13:29:03.649181  956077 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.335640797s)
	I0224 13:29:03.649213  956077 crio.go:469] duration metric: took 2.335814633s to extract the tarball
	I0224 13:29:03.649221  956077 ssh_runner.go:146] rm: /preloaded.tar.lz4
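	(The preload path above is: scp the ~399 MB image tarball to /preloaded.tar.lz4, unpack it into /var, then delete it and re-run crictl to confirm the images landed. The unpack step, as logged, is:)

	    # Extract the preloaded container images into /var, preserving security xattrs, decompressing with lz4.
	    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	    # The tarball is then removed and 'sudo crictl images --output json' verifies the preload.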
	I0224 13:29:03.687968  956077 ssh_runner.go:195] Run: sudo crictl images --output json
	I0224 13:29:03.741442  956077 crio.go:514] all images are preloaded for cri-o runtime.
	I0224 13:29:03.741478  956077 cache_images.go:84] Images are preloaded, skipping loading
	I0224 13:29:03.741490  956077 kubeadm.go:934] updating node { 192.168.39.43 8443 v1.32.2 crio true true} ...
	I0224 13:29:03.741662  956077 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-651381 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.43
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:newest-cni-651381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
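	(This unit override is written a few lines below as the 316-byte /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A generic way to double-check what the node's kubelet ends up running with, plain systemd rather than anything minikube-specific, is:)

	    # Show the kubelet unit together with all drop-ins, including the 10-kubeadm.conf override installed here.
	    systemctl cat kubelet
	    # Inspect the effective ExecStart the service manager resolved.
	    systemctl show kubelet -p ExecStart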
	I0224 13:29:03.741787  956077 ssh_runner.go:195] Run: crio config
	I0224 13:29:03.799716  956077 cni.go:84] Creating CNI manager for ""
	I0224 13:29:03.799747  956077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0224 13:29:03.799764  956077 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0224 13:29:03.799794  956077 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.43 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-651381 NodeName:newest-cni-651381 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.43"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.43 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0224 13:29:03.799960  956077 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.43
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-651381"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.43"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.43"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
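	(This generated manifest is copied to /var/tmp/minikube/kubeadm.yaml.new further down, the 2292-byte scp, and on restart minikube decides whether the control plane needs reconfiguring by diffing it against the previously applied copy, exactly the command that appears later in this log:)

	    # An empty diff lets the restart path conclude the cluster does not require reconfiguration.
	    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new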
	
	I0224 13:29:03.800042  956077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0224 13:29:03.811912  956077 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 13:29:03.812012  956077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0224 13:29:03.823338  956077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0224 13:29:03.842685  956077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0224 13:29:03.861976  956077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0224 13:29:03.882258  956077 ssh_runner.go:195] Run: grep 192.168.39.43	control-plane.minikube.internal$ /etc/hosts
	I0224 13:29:03.887084  956077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.43	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 13:29:03.902004  956077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 13:29:04.052713  956077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0224 13:29:04.071828  956077 certs.go:68] Setting up /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/newest-cni-651381 for IP: 192.168.39.43
	I0224 13:29:04.071866  956077 certs.go:194] generating shared ca certs ...
	I0224 13:29:04.071893  956077 certs.go:226] acquiring lock for ca certs: {Name:mk38777c6b180f63d1816020cff79a01106ddf33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:29:04.072105  956077 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20451-887294/.minikube/ca.key
	I0224 13:29:04.072202  956077 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20451-887294/.minikube/proxy-client-ca.key
	I0224 13:29:04.072219  956077 certs.go:256] generating profile certs ...
	I0224 13:29:04.072346  956077 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/newest-cni-651381/client.key
	I0224 13:29:04.072430  956077 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/newest-cni-651381/apiserver.key.5ef52652
	I0224 13:29:04.072487  956077 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/newest-cni-651381/proxy-client.key
	I0224 13:29:04.072689  956077 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/894564.pem (1338 bytes)
	W0224 13:29:04.072726  956077 certs.go:480] ignoring /home/jenkins/minikube-integration/20451-887294/.minikube/certs/894564_empty.pem, impossibly tiny 0 bytes
	I0224 13:29:04.072737  956077 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca-key.pem (1679 bytes)
	I0224 13:29:04.072760  956077 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem (1082 bytes)
	I0224 13:29:04.072785  956077 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem (1123 bytes)
	I0224 13:29:04.072809  956077 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/key.pem (1679 bytes)
	I0224 13:29:04.072844  956077 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/ssl/certs/8945642.pem (1708 bytes)
	I0224 13:29:04.073566  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0224 13:29:04.112077  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0224 13:29:04.149068  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0224 13:29:04.179616  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0224 13:29:04.209417  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/newest-cni-651381/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0224 13:29:04.245961  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/newest-cni-651381/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0224 13:29:04.279758  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/newest-cni-651381/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0224 13:29:04.306976  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/newest-cni-651381/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0224 13:29:04.334286  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0224 13:29:04.361320  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/certs/894564.pem --> /usr/share/ca-certificates/894564.pem (1338 bytes)
	I0224 13:29:04.387966  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/ssl/certs/8945642.pem --> /usr/share/ca-certificates/8945642.pem (1708 bytes)
	I0224 13:29:04.414747  956077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0224 13:29:04.433921  956077 ssh_runner.go:195] Run: openssl version
	I0224 13:29:04.440667  956077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8945642.pem && ln -fs /usr/share/ca-certificates/8945642.pem /etc/ssl/certs/8945642.pem"
	I0224 13:29:04.453454  956077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8945642.pem
	I0224 13:29:04.459040  956077 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 24 12:09 /usr/share/ca-certificates/8945642.pem
	I0224 13:29:04.459108  956077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8945642.pem
	I0224 13:29:04.466078  956077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8945642.pem /etc/ssl/certs/3ec20f2e.0"
	I0224 13:29:04.478970  956077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0224 13:29:04.491228  956077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0224 13:29:04.496708  956077 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 24 12:01 /usr/share/ca-certificates/minikubeCA.pem
	I0224 13:29:04.496771  956077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0224 13:29:04.503067  956077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0224 13:29:04.515240  956077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/894564.pem && ln -fs /usr/share/ca-certificates/894564.pem /etc/ssl/certs/894564.pem"
	I0224 13:29:04.527524  956077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/894564.pem
	I0224 13:29:04.532779  956077 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 24 12:09 /usr/share/ca-certificates/894564.pem
	I0224 13:29:04.532845  956077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/894564.pem
	I0224 13:29:04.539425  956077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/894564.pem /etc/ssl/certs/51391683.0"
	I0224 13:29:04.551398  956077 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0224 13:29:04.556720  956077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0224 13:29:04.566700  956077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0224 13:29:04.573865  956077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0224 13:29:04.580856  956077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0224 13:29:04.588174  956077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0224 13:29:04.595837  956077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
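	(Each openssl call above exits 0 only if the certificate is still valid 86400 seconds, i.e. 24 hours, from now; presumably that exit status is how the restart path decides whether any cert needs regenerating. An annotated form of the same check:)

	    # -checkend 86400: non-zero exit if the certificate expires within the next 24h.
	    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	      echo "certificate valid for at least another day"
	    else
	      echo "certificate expires within 24h; needs regeneration"
	    fi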
	I0224 13:29:04.603384  956077 kubeadm.go:392] StartCluster: {Name:newest-cni-651381 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-6513
81 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress
: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 13:29:04.603508  956077 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0224 13:29:04.603592  956077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0224 13:29:04.647022  956077 cri.go:89] found id: ""
	I0224 13:29:04.647118  956077 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0224 13:29:04.658566  956077 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0224 13:29:04.658595  956077 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0224 13:29:04.658664  956077 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0224 13:29:04.669446  956077 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0224 13:29:04.670107  956077 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-651381" does not appear in /home/jenkins/minikube-integration/20451-887294/kubeconfig
	I0224 13:29:04.670340  956077 kubeconfig.go:62] /home/jenkins/minikube-integration/20451-887294/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-651381" cluster setting kubeconfig missing "newest-cni-651381" context setting]
	I0224 13:29:04.670763  956077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/kubeconfig: {Name:mk0122b69f41cd40d5267f436266ccce22ce5ef2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:29:04.703477  956077 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0224 13:29:04.714783  956077 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.43
	I0224 13:29:04.714826  956077 kubeadm.go:1160] stopping kube-system containers ...
	I0224 13:29:04.714856  956077 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0224 13:29:04.714926  956077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0224 13:29:04.753447  956077 cri.go:89] found id: ""
	I0224 13:29:04.753549  956077 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0224 13:29:04.771436  956077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 13:29:04.782526  956077 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 13:29:04.782550  956077 kubeadm.go:157] found existing configuration files:
	
	I0224 13:29:04.782599  956077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0224 13:29:04.793248  956077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0224 13:29:04.793349  956077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0224 13:29:04.804033  956077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0224 13:29:04.814167  956077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0224 13:29:04.814256  956077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0224 13:29:04.824390  956077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0224 13:29:04.835928  956077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0224 13:29:04.836009  956077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0224 13:29:04.846849  956077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0224 13:29:04.857291  956077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0224 13:29:04.857371  956077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0224 13:29:04.868432  956077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0224 13:29:04.879429  956077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 13:29:05.016556  956077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 13:29:05.855312  956077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0224 13:29:06.068970  956077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 13:29:06.138545  956077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0224 13:29:06.252222  956077 api_server.go:52] waiting for apiserver process to appear ...
	I0224 13:29:06.252315  956077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:29:06.752623  956077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:29:07.253475  956077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:29:07.273087  956077 api_server.go:72] duration metric: took 1.020861784s to wait for apiserver process to appear ...
	I0224 13:29:07.273129  956077 api_server.go:88] waiting for apiserver healthz status ...
	I0224 13:29:07.273156  956077 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0224 13:29:07.273777  956077 api_server.go:269] stopped: https://192.168.39.43:8443/healthz: Get "https://192.168.39.43:8443/healthz": dial tcp 192.168.39.43:8443: connect: connection refused
	I0224 13:29:07.773461  956077 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0224 13:29:10.395720  956077 api_server.go:279] https://192.168.39.43:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0224 13:29:10.395756  956077 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0224 13:29:10.395777  956077 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0224 13:29:10.424020  956077 api_server.go:279] https://192.168.39.43:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0224 13:29:10.424060  956077 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0224 13:29:10.773537  956077 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0224 13:29:10.778715  956077 api_server.go:279] https://192.168.39.43:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0224 13:29:10.778749  956077 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 13:29:11.273360  956077 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0224 13:29:11.282850  956077 api_server.go:279] https://192.168.39.43:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0224 13:29:11.282888  956077 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 13:29:11.773530  956077 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0224 13:29:11.782399  956077 api_server.go:279] https://192.168.39.43:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0224 13:29:11.782431  956077 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 13:29:12.274112  956077 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0224 13:29:12.279760  956077 api_server.go:279] https://192.168.39.43:8443/healthz returned 200:
	ok
	I0224 13:29:12.286489  956077 api_server.go:141] control plane version: v1.32.2
	I0224 13:29:12.286522  956077 api_server.go:131] duration metric: took 5.013385837s to wait for apiserver health ...
	I0224 13:29:12.286533  956077 cni.go:84] Creating CNI manager for ""
	I0224 13:29:12.286540  956077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0224 13:29:12.288455  956077 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0224 13:29:12.289765  956077 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0224 13:29:12.302198  956077 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0224 13:29:12.341287  956077 system_pods.go:43] waiting for kube-system pods to appear ...
	I0224 13:29:12.353152  956077 system_pods.go:59] 8 kube-system pods found
	I0224 13:29:12.353227  956077 system_pods.go:61] "coredns-668d6bf9bc-5fzqg" [081ec828-51bc-43dd-8eb5-50027cd1e5ce] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0224 13:29:12.353242  956077 system_pods.go:61] "etcd-newest-cni-651381" [49ed84ef-a3f9-41e6-969d-9c36df52bd1e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0224 13:29:12.353256  956077 system_pods.go:61] "kube-apiserver-newest-cni-651381" [3fc7c3f3-60dd-4be5-83d3-43fff952ccb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0224 13:29:12.353266  956077 system_pods.go:61] "kube-controller-manager-newest-cni-651381" [f24e71f1-80e9-408a-b3d9-ad900b5e1955] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0224 13:29:12.353282  956077 system_pods.go:61] "kube-proxy-lh4cg" [024a70db-68c8-4faf-9072-9957034b592a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0224 13:29:12.353292  956077 system_pods.go:61] "kube-scheduler-newest-cni-651381" [9afed0fd-e49a-4d28-9504-1562a04fbb7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0224 13:29:12.353335  956077 system_pods.go:61] "metrics-server-f79f97bbb-zcgjt" [6afaa917-e3b5-4c04-8853-4936ba182e4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0224 13:29:12.353346  956077 system_pods.go:61] "storage-provisioner" [dd4ee237-b34c-481b-8a9d-ff296eca352b] Running
	I0224 13:29:12.353359  956077 system_pods.go:74] duration metric: took 12.029012ms to wait for pod list to return data ...
	I0224 13:29:12.353373  956077 node_conditions.go:102] verifying NodePressure condition ...
	I0224 13:29:12.364913  956077 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0224 13:29:12.364957  956077 node_conditions.go:123] node cpu capacity is 2
	I0224 13:29:12.364975  956077 node_conditions.go:105] duration metric: took 11.585246ms to run NodePressure ...
	I0224 13:29:12.365016  956077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 13:29:12.738521  956077 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0224 13:29:12.751756  956077 ops.go:34] apiserver oom_adj: -16
	I0224 13:29:12.751784  956077 kubeadm.go:597] duration metric: took 8.093182521s to restartPrimaryControlPlane
	I0224 13:29:12.751797  956077 kubeadm.go:394] duration metric: took 8.148429756s to StartCluster
	I0224 13:29:12.751815  956077 settings.go:142] acquiring lock: {Name:mk663e441d32b04abcccdab86db3e15276e74de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:29:12.751904  956077 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20451-887294/kubeconfig
	I0224 13:29:12.752732  956077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/kubeconfig: {Name:mk0122b69f41cd40d5267f436266ccce22ce5ef2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:29:12.753015  956077 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0224 13:29:12.753115  956077 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0224 13:29:12.753237  956077 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-651381"
	I0224 13:29:12.753262  956077 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-651381"
	W0224 13:29:12.753270  956077 addons.go:247] addon storage-provisioner should already be in state true
	I0224 13:29:12.753272  956077 addons.go:69] Setting default-storageclass=true in profile "newest-cni-651381"
	I0224 13:29:12.753291  956077 config.go:182] Loaded profile config "newest-cni-651381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 13:29:12.753300  956077 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-651381"
	I0224 13:29:12.753324  956077 host.go:66] Checking if "newest-cni-651381" exists ...
	I0224 13:29:12.753334  956077 addons.go:69] Setting dashboard=true in profile "newest-cni-651381"
	I0224 13:29:12.753345  956077 addons.go:69] Setting metrics-server=true in profile "newest-cni-651381"
	I0224 13:29:12.753365  956077 addons.go:238] Setting addon dashboard=true in "newest-cni-651381"
	I0224 13:29:12.753372  956077 addons.go:238] Setting addon metrics-server=true in "newest-cni-651381"
	W0224 13:29:12.753382  956077 addons.go:247] addon dashboard should already be in state true
	W0224 13:29:12.753389  956077 addons.go:247] addon metrics-server should already be in state true
	I0224 13:29:12.753419  956077 host.go:66] Checking if "newest-cni-651381" exists ...
	I0224 13:29:12.753424  956077 host.go:66] Checking if "newest-cni-651381" exists ...
	I0224 13:29:12.753799  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:29:12.753809  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:29:12.753844  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:29:12.753852  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:29:12.753859  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:29:12.753877  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:29:12.753896  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:29:12.753907  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:29:12.756327  956077 out.go:177] * Verifying Kubernetes components...
	I0224 13:29:12.757988  956077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 13:29:12.770827  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41209
	I0224 13:29:12.771035  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34785
	I0224 13:29:12.771532  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:29:12.771609  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:29:12.772161  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:29:12.772186  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:29:12.772228  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:29:12.772250  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:29:12.772280  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46419
	I0224 13:29:12.772345  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39593
	I0224 13:29:12.772705  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:29:12.772733  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:29:12.772777  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:29:12.772856  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:29:12.772908  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetState
	I0224 13:29:12.773495  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:29:12.773541  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:29:12.773925  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:29:12.773937  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:29:12.773948  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:29:12.773953  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:29:12.774427  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:29:12.774735  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:29:12.775094  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:29:12.775132  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:29:12.775346  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:29:12.775386  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:29:12.790773  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38083
	I0224 13:29:12.791279  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:29:12.791520  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37107
	I0224 13:29:12.791793  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:29:12.791815  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:29:12.792028  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:29:12.792228  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:29:12.792458  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetState
	I0224 13:29:12.792693  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:29:12.792728  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:29:12.793147  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:29:12.793354  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetState
	I0224 13:29:12.794339  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:29:12.795159  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:29:12.796980  956077 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0224 13:29:12.797044  956077 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0224 13:29:12.798873  956077 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0224 13:29:12.798897  956077 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0224 13:29:12.798924  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:29:12.799025  956077 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0224 13:29:12.800379  956077 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0224 13:29:12.800413  956077 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0224 13:29:12.800444  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:29:12.802889  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:29:12.803112  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:29:12.803154  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:29:12.803253  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:29:12.803514  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:29:12.803684  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:29:12.803835  956077 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/newest-cni-651381/id_rsa Username:docker}
	I0224 13:29:12.804218  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:29:12.804781  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:29:12.804865  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:29:12.804986  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:29:12.805169  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:29:12.805331  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:29:12.805504  956077 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/newest-cni-651381/id_rsa Username:docker}
	I0224 13:29:12.805863  956077 addons.go:238] Setting addon default-storageclass=true in "newest-cni-651381"
	W0224 13:29:12.805886  956077 addons.go:247] addon default-storageclass should already be in state true
	I0224 13:29:12.805921  956077 host.go:66] Checking if "newest-cni-651381" exists ...
	I0224 13:29:12.806263  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:29:12.806310  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:29:12.822073  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36359
	I0224 13:29:12.822078  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33077
	I0224 13:29:12.822532  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:29:12.822608  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:29:12.823097  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:29:12.823120  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:29:12.823190  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:29:12.823208  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:29:12.823472  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:29:12.823587  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:29:12.823766  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetState
	I0224 13:29:12.824054  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:29:12.824092  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:29:12.825722  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:29:12.827968  956077 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 13:29:12.829697  956077 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0224 13:29:12.829721  956077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0224 13:29:12.829743  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:29:12.833829  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:29:12.834243  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:29:12.834272  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:29:12.834576  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:29:12.834868  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:29:12.835030  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:29:12.835176  956077 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/newest-cni-651381/id_rsa Username:docker}
	I0224 13:29:12.841346  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45733
	I0224 13:29:12.841788  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:29:12.842314  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:29:12.842345  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:29:12.842757  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:29:12.842974  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetState
	I0224 13:29:12.844679  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:29:12.844903  956077 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0224 13:29:12.844923  956077 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0224 13:29:12.844944  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:29:12.847773  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:29:12.848236  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:29:12.848274  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:29:12.848424  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:29:12.848652  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:29:12.848819  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:29:12.848952  956077 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/newest-cni-651381/id_rsa Username:docker}
	I0224 13:29:12.994330  956077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0224 13:29:13.013328  956077 api_server.go:52] waiting for apiserver process to appear ...
	I0224 13:29:13.013419  956077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:29:13.031907  956077 api_server.go:72] duration metric: took 278.851886ms to wait for apiserver process to appear ...
	I0224 13:29:13.031946  956077 api_server.go:88] waiting for apiserver healthz status ...
	I0224 13:29:13.031974  956077 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0224 13:29:13.037741  956077 api_server.go:279] https://192.168.39.43:8443/healthz returned 200:
	ok
	I0224 13:29:13.038717  956077 api_server.go:141] control plane version: v1.32.2
	I0224 13:29:13.038740  956077 api_server.go:131] duration metric: took 6.786687ms to wait for apiserver health ...
	I0224 13:29:13.038749  956077 system_pods.go:43] waiting for kube-system pods to appear ...
	I0224 13:29:13.041638  956077 system_pods.go:59] 8 kube-system pods found
	I0224 13:29:13.041677  956077 system_pods.go:61] "coredns-668d6bf9bc-5fzqg" [081ec828-51bc-43dd-8eb5-50027cd1e5ce] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0224 13:29:13.041689  956077 system_pods.go:61] "etcd-newest-cni-651381" [49ed84ef-a3f9-41e6-969d-9c36df52bd1e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0224 13:29:13.041699  956077 system_pods.go:61] "kube-apiserver-newest-cni-651381" [3fc7c3f3-60dd-4be5-83d3-43fff952ccb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0224 13:29:13.041707  956077 system_pods.go:61] "kube-controller-manager-newest-cni-651381" [f24e71f1-80e9-408a-b3d9-ad900b5e1955] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0224 13:29:13.041713  956077 system_pods.go:61] "kube-proxy-lh4cg" [024a70db-68c8-4faf-9072-9957034b592a] Running
	I0224 13:29:13.041723  956077 system_pods.go:61] "kube-scheduler-newest-cni-651381" [9afed0fd-e49a-4d28-9504-1562a04fbb7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0224 13:29:13.041734  956077 system_pods.go:61] "metrics-server-f79f97bbb-zcgjt" [6afaa917-e3b5-4c04-8853-4936ba182e4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0224 13:29:13.041744  956077 system_pods.go:61] "storage-provisioner" [dd4ee237-b34c-481b-8a9d-ff296eca352b] Running
	I0224 13:29:13.041755  956077 system_pods.go:74] duration metric: took 2.998451ms to wait for pod list to return data ...
	I0224 13:29:13.041769  956077 default_sa.go:34] waiting for default service account to be created ...
	I0224 13:29:13.045370  956077 default_sa.go:45] found service account: "default"
	I0224 13:29:13.045406  956077 default_sa.go:55] duration metric: took 3.628344ms for default service account to be created ...
	I0224 13:29:13.045423  956077 kubeadm.go:582] duration metric: took 292.373047ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0224 13:29:13.045461  956077 node_conditions.go:102] verifying NodePressure condition ...
	I0224 13:29:13.048412  956077 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0224 13:29:13.048450  956077 node_conditions.go:123] node cpu capacity is 2
	I0224 13:29:13.048465  956077 node_conditions.go:105] duration metric: took 2.99453ms to run NodePressure ...
	I0224 13:29:13.048482  956077 start.go:241] waiting for startup goroutines ...
	I0224 13:29:13.107171  956077 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0224 13:29:13.107201  956077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0224 13:29:13.119071  956077 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0224 13:29:13.119103  956077 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0224 13:29:13.134996  956077 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0224 13:29:13.135034  956077 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0224 13:29:13.155551  956077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0224 13:29:13.185957  956077 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0224 13:29:13.185995  956077 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0224 13:29:13.186048  956077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0224 13:29:13.188044  956077 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0224 13:29:13.188069  956077 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0224 13:29:13.231557  956077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0224 13:29:13.247560  956077 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0224 13:29:13.247593  956077 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0224 13:29:13.353680  956077 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0224 13:29:13.353706  956077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0224 13:29:13.453436  956077 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0224 13:29:13.453467  956077 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0224 13:29:13.612651  956077 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0224 13:29:13.612689  956077 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0224 13:29:13.761435  956077 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0224 13:29:13.761484  956077 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0224 13:29:11.224324  953268 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0224 13:29:11.225286  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:29:11.225572  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:29:13.875252  956077 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0224 13:29:13.875291  956077 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0224 13:29:13.988211  956077 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0224 13:29:13.988245  956077 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0224 13:29:14.040504  956077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0224 13:29:14.735719  956077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.580126907s)
	I0224 13:29:14.735772  956077 main.go:141] libmachine: Making call to close driver server
	I0224 13:29:14.735781  956077 main.go:141] libmachine: (newest-cni-651381) Calling .Close
	I0224 13:29:14.735890  956077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.549811056s)
	I0224 13:29:14.735948  956077 main.go:141] libmachine: Making call to close driver server
	I0224 13:29:14.735960  956077 main.go:141] libmachine: (newest-cni-651381) Calling .Close
	I0224 13:29:14.736196  956077 main.go:141] libmachine: (newest-cni-651381) DBG | Closing plugin on server side
	I0224 13:29:14.736226  956077 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:29:14.736242  956077 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:29:14.736258  956077 main.go:141] libmachine: Making call to close driver server
	I0224 13:29:14.736272  956077 main.go:141] libmachine: (newest-cni-651381) Calling .Close
	I0224 13:29:14.736296  956077 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:29:14.736311  956077 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:29:14.736321  956077 main.go:141] libmachine: Making call to close driver server
	I0224 13:29:14.736344  956077 main.go:141] libmachine: (newest-cni-651381) Calling .Close
	I0224 13:29:14.736595  956077 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:29:14.736611  956077 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:29:14.736658  956077 main.go:141] libmachine: (newest-cni-651381) DBG | Closing plugin on server side
	I0224 13:29:14.736872  956077 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:29:14.736892  956077 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:29:14.745116  956077 main.go:141] libmachine: Making call to close driver server
	I0224 13:29:14.745148  956077 main.go:141] libmachine: (newest-cni-651381) Calling .Close
	I0224 13:29:14.745492  956077 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:29:14.745517  956077 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:29:14.745526  956077 main.go:141] libmachine: (newest-cni-651381) DBG | Closing plugin on server side
	I0224 13:29:14.894767  956077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.663157775s)
	I0224 13:29:14.894851  956077 main.go:141] libmachine: Making call to close driver server
	I0224 13:29:14.894872  956077 main.go:141] libmachine: (newest-cni-651381) Calling .Close
	I0224 13:29:14.895200  956077 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:29:14.895223  956077 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:29:14.895234  956077 main.go:141] libmachine: Making call to close driver server
	I0224 13:29:14.895241  956077 main.go:141] libmachine: (newest-cni-651381) Calling .Close
	I0224 13:29:14.895512  956077 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:29:14.895531  956077 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:29:14.895543  956077 addons.go:479] Verifying addon metrics-server=true in "newest-cni-651381"
	I0224 13:29:15.529417  956077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.48881961s)
	I0224 13:29:15.529510  956077 main.go:141] libmachine: Making call to close driver server
	I0224 13:29:15.529526  956077 main.go:141] libmachine: (newest-cni-651381) Calling .Close
	I0224 13:29:15.529885  956077 main.go:141] libmachine: (newest-cni-651381) DBG | Closing plugin on server side
	I0224 13:29:15.529896  956077 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:29:15.529910  956077 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:29:15.529921  956077 main.go:141] libmachine: Making call to close driver server
	I0224 13:29:15.529930  956077 main.go:141] libmachine: (newest-cni-651381) Calling .Close
	I0224 13:29:15.530216  956077 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:29:15.530235  956077 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:29:15.532337  956077 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-651381 addons enable metrics-server
	
	I0224 13:29:15.534011  956077 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0224 13:29:15.535543  956077 addons.go:514] duration metric: took 2.78244386s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0224 13:29:15.535586  956077 start.go:246] waiting for cluster config update ...
	I0224 13:29:15.535599  956077 start.go:255] writing updated cluster config ...
	I0224 13:29:15.535868  956077 ssh_runner.go:195] Run: rm -f paused
	I0224 13:29:15.604806  956077 start.go:600] kubectl: 1.32.2, cluster: 1.32.2 (minor skew: 0)
	I0224 13:29:15.606756  956077 out.go:177] * Done! kubectl is now configured to use "newest-cni-651381" cluster and "default" namespace by default
	I0224 13:29:16.226144  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:29:16.226358  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:29:26.227187  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:29:26.227476  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:29:46.228012  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:29:46.228297  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:30:26.229952  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:30:26.230229  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:30:26.230260  953268 kubeadm.go:310] 
	I0224 13:30:26.230300  953268 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0224 13:30:26.230364  953268 kubeadm.go:310] 		timed out waiting for the condition
	I0224 13:30:26.230392  953268 kubeadm.go:310] 
	I0224 13:30:26.230441  953268 kubeadm.go:310] 	This error is likely caused by:
	I0224 13:30:26.230505  953268 kubeadm.go:310] 		- The kubelet is not running
	I0224 13:30:26.230648  953268 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0224 13:30:26.230661  953268 kubeadm.go:310] 
	I0224 13:30:26.230806  953268 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0224 13:30:26.230857  953268 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0224 13:30:26.230902  953268 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0224 13:30:26.230911  953268 kubeadm.go:310] 
	I0224 13:30:26.231038  953268 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0224 13:30:26.231147  953268 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0224 13:30:26.231163  953268 kubeadm.go:310] 
	I0224 13:30:26.231301  953268 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0224 13:30:26.231435  953268 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0224 13:30:26.231545  953268 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0224 13:30:26.231657  953268 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0224 13:30:26.231675  953268 kubeadm.go:310] 
	I0224 13:30:26.232473  953268 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 13:30:26.232591  953268 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0224 13:30:26.232710  953268 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0224 13:30:26.232936  953268 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0224 13:30:26.232991  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0224 13:30:26.704666  953268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 13:30:26.720451  953268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 13:30:26.732280  953268 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 13:30:26.732306  953268 kubeadm.go:157] found existing configuration files:
	
	I0224 13:30:26.732371  953268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0224 13:30:26.743971  953268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0224 13:30:26.744050  953268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0224 13:30:26.755216  953268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0224 13:30:26.766460  953268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0224 13:30:26.766542  953268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0224 13:30:26.778117  953268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0224 13:30:26.789142  953268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0224 13:30:26.789208  953268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0224 13:30:26.800621  953268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0224 13:30:26.811672  953268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0224 13:30:26.811755  953268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0224 13:30:26.823061  953268 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0224 13:30:27.039614  953268 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 13:32:23.115672  953268 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0224 13:32:23.115858  953268 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0224 13:32:23.117520  953268 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0224 13:32:23.117626  953268 kubeadm.go:310] [preflight] Running pre-flight checks
	I0224 13:32:23.117831  953268 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0224 13:32:23.118008  953268 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0224 13:32:23.118171  953268 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0224 13:32:23.118281  953268 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0224 13:32:23.120434  953268 out.go:235]   - Generating certificates and keys ...
	I0224 13:32:23.120529  953268 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0224 13:32:23.120621  953268 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0224 13:32:23.120736  953268 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0224 13:32:23.120819  953268 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0224 13:32:23.120905  953268 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0224 13:32:23.120957  953268 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0224 13:32:23.121011  953268 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0224 13:32:23.121066  953268 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0224 13:32:23.121134  953268 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0224 13:32:23.121202  953268 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0224 13:32:23.121237  953268 kubeadm.go:310] [certs] Using the existing "sa" key
	I0224 13:32:23.121355  953268 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0224 13:32:23.121422  953268 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0224 13:32:23.121526  953268 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0224 13:32:23.121602  953268 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0224 13:32:23.121654  953268 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0224 13:32:23.121775  953268 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 13:32:23.121914  953268 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 13:32:23.121964  953268 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0224 13:32:23.122028  953268 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0224 13:32:23.123732  953268 out.go:235]   - Booting up control plane ...
	I0224 13:32:23.123835  953268 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0224 13:32:23.123904  953268 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0224 13:32:23.123986  953268 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0224 13:32:23.124096  953268 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0224 13:32:23.124279  953268 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0224 13:32:23.124332  953268 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0224 13:32:23.124401  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:32:23.124595  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:32:23.124691  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:32:23.124893  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:32:23.124960  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:32:23.125150  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:32:23.125220  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:32:23.125409  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:32:23.125508  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:32:23.125791  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:32:23.125817  953268 kubeadm.go:310] 
	I0224 13:32:23.125871  953268 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0224 13:32:23.125925  953268 kubeadm.go:310] 		timed out waiting for the condition
	I0224 13:32:23.125935  953268 kubeadm.go:310] 
	I0224 13:32:23.125985  953268 kubeadm.go:310] 	This error is likely caused by:
	I0224 13:32:23.126040  953268 kubeadm.go:310] 		- The kubelet is not running
	I0224 13:32:23.126194  953268 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0224 13:32:23.126222  953268 kubeadm.go:310] 
	I0224 13:32:23.126328  953268 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0224 13:32:23.126364  953268 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0224 13:32:23.126411  953268 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0224 13:32:23.126421  953268 kubeadm.go:310] 
	I0224 13:32:23.126543  953268 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0224 13:32:23.126655  953268 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0224 13:32:23.126665  953268 kubeadm.go:310] 
	I0224 13:32:23.126777  953268 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0224 13:32:23.126856  953268 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0224 13:32:23.126925  953268 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0224 13:32:23.127003  953268 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0224 13:32:23.127087  953268 kubeadm.go:310] 
	I0224 13:32:23.127095  953268 kubeadm.go:394] duration metric: took 7m58.850238597s to StartCluster
	I0224 13:32:23.127168  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:32:23.127245  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:32:23.173206  953268 cri.go:89] found id: ""
	I0224 13:32:23.173252  953268 logs.go:282] 0 containers: []
	W0224 13:32:23.173265  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:32:23.173274  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:32:23.173355  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:32:23.220974  953268 cri.go:89] found id: ""
	I0224 13:32:23.221008  953268 logs.go:282] 0 containers: []
	W0224 13:32:23.221017  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:32:23.221024  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:32:23.221095  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:32:23.256282  953268 cri.go:89] found id: ""
	I0224 13:32:23.256316  953268 logs.go:282] 0 containers: []
	W0224 13:32:23.256327  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:32:23.256335  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:32:23.256423  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:32:23.292296  953268 cri.go:89] found id: ""
	I0224 13:32:23.292329  953268 logs.go:282] 0 containers: []
	W0224 13:32:23.292340  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:32:23.292355  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:32:23.292422  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:32:23.328368  953268 cri.go:89] found id: ""
	I0224 13:32:23.328399  953268 logs.go:282] 0 containers: []
	W0224 13:32:23.328408  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:32:23.328414  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:32:23.328488  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:32:23.380963  953268 cri.go:89] found id: ""
	I0224 13:32:23.380995  953268 logs.go:282] 0 containers: []
	W0224 13:32:23.381005  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:32:23.381014  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:32:23.381083  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:32:23.448170  953268 cri.go:89] found id: ""
	I0224 13:32:23.448206  953268 logs.go:282] 0 containers: []
	W0224 13:32:23.448219  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:32:23.448227  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:32:23.448301  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:32:23.494938  953268 cri.go:89] found id: ""
	I0224 13:32:23.494969  953268 logs.go:282] 0 containers: []
	W0224 13:32:23.494978  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:32:23.494989  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:32:23.495004  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:32:23.545770  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:32:23.545817  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:32:23.561559  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:32:23.561608  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:32:23.639942  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:32:23.639969  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:32:23.639983  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:32:23.748671  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:32:23.748715  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0224 13:32:23.790465  953268 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0224 13:32:23.790543  953268 out.go:270] * 
	W0224 13:32:23.790632  953268 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0224 13:32:23.790650  953268 out.go:270] * 
	W0224 13:32:23.791585  953268 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0224 13:32:23.796216  953268 out.go:201] 
	W0224 13:32:23.797430  953268 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0224 13:32:23.797505  953268 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0224 13:32:23.797547  953268 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0224 13:32:23.799102  953268 out.go:201] 
	
	
	==> CRI-O <==
	Feb 24 13:32:24 old-k8s-version-233759 crio[626]: time="2025-02-24 13:32:24.818034452Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740403944817995810,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8ff322bc-112e-4b52-9ca0-2b91f9275193 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:32:24 old-k8s-version-233759 crio[626]: time="2025-02-24 13:32:24.818993637Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26fa9ae3-8578-4a11-a6b6-7b89e31d92ef name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:32:24 old-k8s-version-233759 crio[626]: time="2025-02-24 13:32:24.819043727Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26fa9ae3-8578-4a11-a6b6-7b89e31d92ef name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:32:24 old-k8s-version-233759 crio[626]: time="2025-02-24 13:32:24.819088515Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=26fa9ae3-8578-4a11-a6b6-7b89e31d92ef name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:32:24 old-k8s-version-233759 crio[626]: time="2025-02-24 13:32:24.857424701Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4d04e531-46eb-4532-8b60-6946863b0a33 name=/runtime.v1.RuntimeService/Version
	Feb 24 13:32:24 old-k8s-version-233759 crio[626]: time="2025-02-24 13:32:24.857499554Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4d04e531-46eb-4532-8b60-6946863b0a33 name=/runtime.v1.RuntimeService/Version
	Feb 24 13:32:24 old-k8s-version-233759 crio[626]: time="2025-02-24 13:32:24.859512813Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c6265193-a5eb-4f9c-a1d9-f14c7ed7e892 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:32:24 old-k8s-version-233759 crio[626]: time="2025-02-24 13:32:24.859997578Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740403944859971867,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c6265193-a5eb-4f9c-a1d9-f14c7ed7e892 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:32:24 old-k8s-version-233759 crio[626]: time="2025-02-24 13:32:24.860652136Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=02848514-728f-480f-99e2-b19fb2204aa3 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:32:24 old-k8s-version-233759 crio[626]: time="2025-02-24 13:32:24.860705353Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=02848514-728f-480f-99e2-b19fb2204aa3 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:32:24 old-k8s-version-233759 crio[626]: time="2025-02-24 13:32:24.860743038Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=02848514-728f-480f-99e2-b19fb2204aa3 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:32:24 old-k8s-version-233759 crio[626]: time="2025-02-24 13:32:24.898687788Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2c83f5e3-19d8-4f85-abc2-0fc661217b70 name=/runtime.v1.RuntimeService/Version
	Feb 24 13:32:24 old-k8s-version-233759 crio[626]: time="2025-02-24 13:32:24.898818958Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2c83f5e3-19d8-4f85-abc2-0fc661217b70 name=/runtime.v1.RuntimeService/Version
	Feb 24 13:32:24 old-k8s-version-233759 crio[626]: time="2025-02-24 13:32:24.900568129Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4095d63d-8031-450f-a196-f10c2fd9d97c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:32:24 old-k8s-version-233759 crio[626]: time="2025-02-24 13:32:24.901075756Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740403944901050873,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4095d63d-8031-450f-a196-f10c2fd9d97c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:32:24 old-k8s-version-233759 crio[626]: time="2025-02-24 13:32:24.901697095Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92e8451a-f40f-40cf-b439-d6c70b3dc3a2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:32:24 old-k8s-version-233759 crio[626]: time="2025-02-24 13:32:24.901743488Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92e8451a-f40f-40cf-b439-d6c70b3dc3a2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:32:24 old-k8s-version-233759 crio[626]: time="2025-02-24 13:32:24.901837349Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=92e8451a-f40f-40cf-b439-d6c70b3dc3a2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:32:24 old-k8s-version-233759 crio[626]: time="2025-02-24 13:32:24.939031530Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c2ceabe1-73c0-4212-a4eb-b9699554fbfd name=/runtime.v1.RuntimeService/Version
	Feb 24 13:32:24 old-k8s-version-233759 crio[626]: time="2025-02-24 13:32:24.939116419Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c2ceabe1-73c0-4212-a4eb-b9699554fbfd name=/runtime.v1.RuntimeService/Version
	Feb 24 13:32:24 old-k8s-version-233759 crio[626]: time="2025-02-24 13:32:24.940432443Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ccfe74da-daf3-436f-8460-7fe43fa99651 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:32:24 old-k8s-version-233759 crio[626]: time="2025-02-24 13:32:24.940991882Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740403944940959024,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ccfe74da-daf3-436f-8460-7fe43fa99651 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:32:24 old-k8s-version-233759 crio[626]: time="2025-02-24 13:32:24.941832218Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=961fcd58-7efb-4ae0-87a2-866c1315a16a name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:32:24 old-k8s-version-233759 crio[626]: time="2025-02-24 13:32:24.941882942Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=961fcd58-7efb-4ae0-87a2-866c1315a16a name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:32:24 old-k8s-version-233759 crio[626]: time="2025-02-24 13:32:24.941919060Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=961fcd58-7efb-4ae0-87a2-866c1315a16a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb24 13:23] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054709] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042708] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Feb24 13:24] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.133792] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.074673] overlayfs: failed to resolve '/var/lib/containers/storage/overlay/opaque-bug-check3889635992/l1': -2
	[  +0.613262] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.739778] systemd-fstab-generator[553]: Ignoring "noauto" option for root device
	[  +0.062960] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072258] systemd-fstab-generator[565]: Ignoring "noauto" option for root device
	[  +0.214347] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.136588] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.281511] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +7.250646] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.068712] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.282155] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[ +12.331271] kauditd_printk_skb: 46 callbacks suppressed
	[Feb24 13:28] systemd-fstab-generator[4979]: Ignoring "noauto" option for root device
	[Feb24 13:30] systemd-fstab-generator[5261]: Ignoring "noauto" option for root device
	[  +0.064430] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:32:25 up 8 min,  0 users,  load average: 0.09, 0.10, 0.07
	Linux old-k8s-version-233759 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 24 13:32:22 old-k8s-version-233759 kubelet[5437]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Feb 24 13:32:22 old-k8s-version-233759 kubelet[5437]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Feb 24 13:32:22 old-k8s-version-233759 kubelet[5437]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Feb 24 13:32:22 old-k8s-version-233759 kubelet[5437]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000b8c6f0)
	Feb 24 13:32:22 old-k8s-version-233759 kubelet[5437]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Feb 24 13:32:22 old-k8s-version-233759 kubelet[5437]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000a47ef0, 0x4f0ac20, 0xc000cad5e0, 0x1, 0xc00009e0c0)
	Feb 24 13:32:22 old-k8s-version-233759 kubelet[5437]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Feb 24 13:32:22 old-k8s-version-233759 kubelet[5437]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0001f7180, 0xc00009e0c0)
	Feb 24 13:32:22 old-k8s-version-233759 kubelet[5437]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Feb 24 13:32:22 old-k8s-version-233759 kubelet[5437]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Feb 24 13:32:22 old-k8s-version-233759 kubelet[5437]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Feb 24 13:32:22 old-k8s-version-233759 kubelet[5437]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000dde720, 0xc000dc0ec0)
	Feb 24 13:32:22 old-k8s-version-233759 kubelet[5437]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Feb 24 13:32:22 old-k8s-version-233759 kubelet[5437]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Feb 24 13:32:22 old-k8s-version-233759 kubelet[5437]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Feb 24 13:32:22 old-k8s-version-233759 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 24 13:32:22 old-k8s-version-233759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 24 13:32:23 old-k8s-version-233759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Feb 24 13:32:23 old-k8s-version-233759 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 24 13:32:23 old-k8s-version-233759 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 24 13:32:23 old-k8s-version-233759 kubelet[5472]: I0224 13:32:23.468230    5472 server.go:416] Version: v1.20.0
	Feb 24 13:32:23 old-k8s-version-233759 kubelet[5472]: I0224 13:32:23.468585    5472 server.go:837] Client rotation is on, will bootstrap in background
	Feb 24 13:32:23 old-k8s-version-233759 kubelet[5472]: I0224 13:32:23.470684    5472 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 24 13:32:23 old-k8s-version-233759 kubelet[5472]: I0224 13:32:23.471709    5472 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Feb 24 13:32:23 old-k8s-version-233759 kubelet[5472]: W0224 13:32:23.471755    5472 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-233759 -n old-k8s-version-233759
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-233759 -n old-k8s-version-233759: exit status 2 (240.805959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-233759" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (510.77s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
E0224 13:32:30.948926  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/enable-default-cni-799329/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
E0224 13:32:34.487865  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/no-preload-956442/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
E0224 13:32:41.638816  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/default-k8s-diff-port-108648/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
E0224 13:32:43.833630  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
E0224 13:33:28.218547  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/bridge-799329/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
E0224 13:33:54.809688  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/auto-799329/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
E0224 13:34:12.769487  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
E0224 13:34:50.624417  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/no-preload-956442/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:34:57.777860  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/default-k8s-diff-port-108648/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:35:17.876481  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/auto-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:35:18.329700  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/no-preload-956442/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:35:23.723385  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kindnet-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:35:25.480524  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/default-k8s-diff-port-108648/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:36:04.068587  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/calico-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:36:46.787411  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kindnet-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:36:46.849068  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:36:47.049832  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/custom-flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:37:27.133473  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/calico-799329/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
[previous line repeated 3 more times]
E0224 13:37:30.948749  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/enable-default-cni-799329/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
[previous line repeated 12 more times]
E0224 13:37:43.833136  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
[previous line repeated 25 more times]
E0224 13:38:10.115941  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/custom-flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
[previous line repeated 17 more times]
E0224 13:38:28.218451  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/bridge-799329/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
[previous line repeated 25 more times]
E0224 13:38:54.013338  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/enable-default-cni-799329/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
E0224 13:38:54.808816  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/auto-799329/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
[previous line repeated 11 more times]
E0224 13:39:06.897788  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
[previous line repeated 5 more times]
E0224 13:39:12.769807  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
[previous line repeated 37 more times]
E0224 13:39:50.623687  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/no-preload-956442/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:39:51.283985  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/bridge-799329/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
[previous line repeated 6 more times]
E0224 13:39:57.777575  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/default-k8s-diff-port-108648/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
E0224 13:40:23.723982  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kindnet-799329/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
E0224 13:41:04.068194  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/calico-799329/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
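The repeated warnings above come from the test helper polling the kubernetes-dashboard pod list while the cluster's apiserver at 192.168.50.62:8443 is refusing connections; the interleaved cert_rotation errors reference client certificates for other profiles (no-preload-956442, bridge-799329, default-k8s-diff-port-108648, kindnet-799329, calico-799329) whose files are no longer on disk, most likely because those profiles had already been deleted earlier in the run. A quick manual check of which profile certificates remain, shown only as an illustration and not part of the test run, would be:

	ls -l /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/*/client.crt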
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-233759 -n old-k8s-version-233759
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-233759 -n old-k8s-version-233759: exit status 2 (243.739872ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-233759" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
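The poll that times out above is equivalent to listing the dashboard pods by the label selector shown in the warnings. A minimal manual reproduction sketch, assuming the kubectl context carries the profile name as minikube normally configures it, would be:

	kubectl --context old-k8s-version-233759 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard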
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-233759 -n old-k8s-version-233759
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-233759 -n old-k8s-version-233759: exit status 2 (233.236655ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
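Because the plain status output above only reports the host state, a per-component view can help when triaging this kind of failure; one possible check, assuming the --output flag is available in this minikube build, is:

	out/minikube-linux-amd64 status --output json -p old-k8s-version-233759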
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-233759 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-037381 image list                          | embed-certs-037381           | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-037381                                  | embed-certs-037381           | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-037381                                  | embed-certs-037381           | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-037381                                  | embed-certs-037381           | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	| delete  | -p embed-certs-037381                                  | embed-certs-037381           | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	| start   | -p newest-cni-651381 --memory=2200 --alsologtostderr   | newest-cni-651381            | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:28 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| image   | no-preload-956442 image list                           | no-preload-956442            | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-956442                                   | no-preload-956442            | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-956442                                   | no-preload-956442            | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-956442                                   | no-preload-956442            | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	| delete  | -p no-preload-956442                                   | no-preload-956442            | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	| image   | default-k8s-diff-port-108648                           | default-k8s-diff-port-108648 | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-108648 | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | default-k8s-diff-port-108648                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-108648 | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | default-k8s-diff-port-108648                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-108648 | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | default-k8s-diff-port-108648                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-108648 | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | default-k8s-diff-port-108648                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-651381             | newest-cni-651381            | jenkins | v1.35.0 | 24 Feb 25 13:28 UTC | 24 Feb 25 13:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-651381                                   | newest-cni-651381            | jenkins | v1.35.0 | 24 Feb 25 13:28 UTC | 24 Feb 25 13:28 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-651381                  | newest-cni-651381            | jenkins | v1.35.0 | 24 Feb 25 13:28 UTC | 24 Feb 25 13:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-651381 --memory=2200 --alsologtostderr   | newest-cni-651381            | jenkins | v1.35.0 | 24 Feb 25 13:28 UTC | 24 Feb 25 13:29 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| image   | newest-cni-651381 image list                           | newest-cni-651381            | jenkins | v1.35.0 | 24 Feb 25 13:29 UTC | 24 Feb 25 13:29 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-651381                                   | newest-cni-651381            | jenkins | v1.35.0 | 24 Feb 25 13:29 UTC | 24 Feb 25 13:29 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-651381                                   | newest-cni-651381            | jenkins | v1.35.0 | 24 Feb 25 13:29 UTC | 24 Feb 25 13:29 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-651381                                   | newest-cni-651381            | jenkins | v1.35.0 | 24 Feb 25 13:29 UTC | 24 Feb 25 13:29 UTC |
	| delete  | -p newest-cni-651381                                   | newest-cni-651381            | jenkins | v1.35.0 | 24 Feb 25 13:29 UTC | 24 Feb 25 13:29 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/24 13:28:38
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 13:28:38.792971  956077 out.go:345] Setting OutFile to fd 1 ...
	I0224 13:28:38.793077  956077 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 13:28:38.793085  956077 out.go:358] Setting ErrFile to fd 2...
	I0224 13:28:38.793089  956077 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 13:28:38.793277  956077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-887294/.minikube/bin
	I0224 13:28:38.793883  956077 out.go:352] Setting JSON to false
	I0224 13:28:38.794844  956077 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":11460,"bootTime":1740392259,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 13:28:38.794956  956077 start.go:139] virtualization: kvm guest
	I0224 13:28:38.797461  956077 out.go:177] * [newest-cni-651381] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0224 13:28:38.798901  956077 out.go:177]   - MINIKUBE_LOCATION=20451
	I0224 13:28:38.798939  956077 notify.go:220] Checking for updates...
	I0224 13:28:38.801509  956077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 13:28:38.802725  956077 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20451-887294/kubeconfig
	I0224 13:28:38.804035  956077 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-887294/.minikube
	I0224 13:28:38.805462  956077 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0224 13:28:38.806731  956077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 13:28:38.808519  956077 config.go:182] Loaded profile config "newest-cni-651381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 13:28:38.808929  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:28:38.808983  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:28:38.824230  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33847
	I0224 13:28:38.824657  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:28:38.825223  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:28:38.825247  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:28:38.825706  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:28:38.825963  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:28:38.826250  956077 driver.go:394] Setting default libvirt URI to qemu:///system
	I0224 13:28:38.826574  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:28:38.826623  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:28:38.841716  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37847
	I0224 13:28:38.842131  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:28:38.842597  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:28:38.842619  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:28:38.842935  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:28:38.843142  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:28:38.879934  956077 out.go:177] * Using the kvm2 driver based on existing profile
	I0224 13:28:38.881238  956077 start.go:297] selected driver: kvm2
	I0224 13:28:38.881261  956077 start.go:901] validating driver "kvm2" against &{Name:newest-cni-651381 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-651381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 13:28:38.881430  956077 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 13:28:38.882088  956077 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 13:28:38.882170  956077 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20451-887294/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0224 13:28:38.897736  956077 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0224 13:28:38.898150  956077 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0224 13:28:38.898189  956077 cni.go:84] Creating CNI manager for ""
	I0224 13:28:38.898247  956077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0224 13:28:38.898285  956077 start.go:340] cluster config:
	{Name:newest-cni-651381 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-651381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 13:28:38.898383  956077 iso.go:125] acquiring lock: {Name:mk57408cca66a96a13d93cda43cdfac6e61aef3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 13:28:38.900247  956077 out.go:177] * Starting "newest-cni-651381" primary control-plane node in "newest-cni-651381" cluster
	I0224 13:28:38.901467  956077 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0224 13:28:38.901516  956077 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0224 13:28:38.901527  956077 cache.go:56] Caching tarball of preloaded images
	I0224 13:28:38.901613  956077 preload.go:172] Found /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0224 13:28:38.901623  956077 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0224 13:28:38.901723  956077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/newest-cni-651381/config.json ...
	I0224 13:28:38.901897  956077 start.go:360] acquireMachinesLock for newest-cni-651381: {Name:mk023761b01bb629a1acd40bc8104cc517b0e15b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0224 13:28:38.901940  956077 start.go:364] duration metric: took 22.052µs to acquireMachinesLock for "newest-cni-651381"
	I0224 13:28:38.901954  956077 start.go:96] Skipping create...Using existing machine configuration
	I0224 13:28:38.901962  956077 fix.go:54] fixHost starting: 
	I0224 13:28:38.902241  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:28:38.902287  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:28:38.917188  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45863
	I0224 13:28:38.917773  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:28:38.918380  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:28:38.918452  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:28:38.918772  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:28:38.918951  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:28:38.919074  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetState
	I0224 13:28:38.920729  956077 fix.go:112] recreateIfNeeded on newest-cni-651381: state=Stopped err=<nil>
	I0224 13:28:38.920774  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	W0224 13:28:38.920911  956077 fix.go:138] unexpected machine state, will restart: <nil>
	I0224 13:28:38.922862  956077 out.go:177] * Restarting existing kvm2 VM for "newest-cni-651381" ...
	I0224 13:28:38.924182  956077 main.go:141] libmachine: (newest-cni-651381) Calling .Start
	I0224 13:28:38.924366  956077 main.go:141] libmachine: (newest-cni-651381) starting domain...
	I0224 13:28:38.924388  956077 main.go:141] libmachine: (newest-cni-651381) ensuring networks are active...
	I0224 13:28:38.925130  956077 main.go:141] libmachine: (newest-cni-651381) Ensuring network default is active
	I0224 13:28:38.925476  956077 main.go:141] libmachine: (newest-cni-651381) Ensuring network mk-newest-cni-651381 is active
	I0224 13:28:38.925802  956077 main.go:141] libmachine: (newest-cni-651381) getting domain XML...
	I0224 13:28:38.926703  956077 main.go:141] libmachine: (newest-cni-651381) creating domain...
	I0224 13:28:40.156271  956077 main.go:141] libmachine: (newest-cni-651381) waiting for IP...
	I0224 13:28:40.157205  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:40.157681  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:40.157772  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:40.157685  956112 retry.go:31] will retry after 260.668185ms: waiting for domain to come up
	I0224 13:28:40.420311  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:40.420800  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:40.420848  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:40.420767  956112 retry.go:31] will retry after 303.764677ms: waiting for domain to come up
	I0224 13:28:40.726666  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:40.727228  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:40.727281  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:40.727200  956112 retry.go:31] will retry after 355.373964ms: waiting for domain to come up
	I0224 13:28:41.083712  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:41.084293  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:41.084350  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:41.084276  956112 retry.go:31] will retry after 470.293336ms: waiting for domain to come up
	I0224 13:28:41.556004  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:41.556503  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:41.556533  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:41.556435  956112 retry.go:31] will retry after 528.413702ms: waiting for domain to come up
	I0224 13:28:42.086215  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:42.086654  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:42.086688  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:42.086615  956112 retry.go:31] will retry after 758.532968ms: waiting for domain to come up
	I0224 13:28:42.846682  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:42.847289  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:42.847316  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:42.847249  956112 retry.go:31] will retry after 771.163995ms: waiting for domain to come up
	I0224 13:28:43.620325  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:43.620953  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:43.620987  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:43.620927  956112 retry.go:31] will retry after 1.349772038s: waiting for domain to come up
	I0224 13:28:44.971949  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:44.972514  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:44.972544  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:44.972446  956112 retry.go:31] will retry after 1.187923617s: waiting for domain to come up
	I0224 13:28:46.161965  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:46.162503  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:46.162523  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:46.162464  956112 retry.go:31] will retry after 2.129619904s: waiting for domain to come up
	I0224 13:28:48.294708  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:48.295258  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:48.295292  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:48.295208  956112 retry.go:31] will retry after 2.033415833s: waiting for domain to come up
	I0224 13:28:50.330158  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:50.330661  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:50.330693  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:50.330607  956112 retry.go:31] will retry after 3.415912416s: waiting for domain to come up
	I0224 13:28:53.750421  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:53.750924  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:53.750982  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:53.750908  956112 retry.go:31] will retry after 3.200463394s: waiting for domain to come up
	I0224 13:28:56.955224  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:56.955868  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has current primary IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:56.955897  956077 main.go:141] libmachine: (newest-cni-651381) found domain IP: 192.168.39.43
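The "will retry after ..." lines above show minikube polling libvirt for a DHCP lease with a growing, jittered delay until the domain reports an IP address. A minimal, self-contained Go sketch of that pattern follows; the function name, backoff bounds, and jitter are illustrative assumptions, not minikube's actual retry helper.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitFor polls check until it succeeds or the overall timeout expires,
    // sleeping a jittered, growing delay between attempts -- the shape of the
    // "will retry after ..." lines in the log above.
    func waitFor(check func() (string, error), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 250 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := check(); err == nil {
    			return ip, nil
    		}
    		// up to 50% jitter on top of the base delay (illustrative bounds)
    		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
    		time.Sleep(sleep)
    		if delay < 4*time.Second {
    			delay = delay * 3 / 2
    		}
    	}
    	return "", errors.New("timed out waiting for domain IP")
    }

    func main() {
    	attempts := 0
    	ip, err := waitFor(func() (string, error) {
    		attempts++
    		if attempts < 4 { // pretend the lease appears on the 4th poll
    			return "", errors.New("no DHCP lease yet")
    		}
    		return "192.168.39.43", nil
    	}, 2*time.Minute)
    	fmt.Println(ip, err)
    }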
	I0224 13:28:56.955914  956077 main.go:141] libmachine: (newest-cni-651381) reserving static IP address...
	I0224 13:28:56.956419  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "newest-cni-651381", mac: "52:54:00:1b:98:b8", ip: "192.168.39.43"} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:56.956465  956077 main.go:141] libmachine: (newest-cni-651381) DBG | skip adding static IP to network mk-newest-cni-651381 - found existing host DHCP lease matching {name: "newest-cni-651381", mac: "52:54:00:1b:98:b8", ip: "192.168.39.43"}
	I0224 13:28:56.956483  956077 main.go:141] libmachine: (newest-cni-651381) reserved static IP address 192.168.39.43 for domain newest-cni-651381
	I0224 13:28:56.956496  956077 main.go:141] libmachine: (newest-cni-651381) waiting for SSH...
	I0224 13:28:56.956507  956077 main.go:141] libmachine: (newest-cni-651381) DBG | Getting to WaitForSSH function...
	I0224 13:28:56.959046  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:56.959392  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:56.959427  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:56.959538  956077 main.go:141] libmachine: (newest-cni-651381) DBG | Using SSH client type: external
	I0224 13:28:56.959564  956077 main.go:141] libmachine: (newest-cni-651381) DBG | Using SSH private key: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/newest-cni-651381/id_rsa (-rw-------)
	I0224 13:28:56.959630  956077 main.go:141] libmachine: (newest-cni-651381) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.43 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20451-887294/.minikube/machines/newest-cni-651381/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0224 13:28:56.959653  956077 main.go:141] libmachine: (newest-cni-651381) DBG | About to run SSH command:
	I0224 13:28:56.959689  956077 main.go:141] libmachine: (newest-cni-651381) DBG | exit 0
	I0224 13:28:57.089584  956077 main.go:141] libmachine: (newest-cni-651381) DBG | SSH cmd err, output: <nil>: 
	I0224 13:28:57.089980  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetConfigRaw
	I0224 13:28:57.090668  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetIP
	I0224 13:28:57.093149  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.093555  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:57.093576  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.093814  956077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/newest-cni-651381/config.json ...
	I0224 13:28:57.094015  956077 machine.go:93] provisionDockerMachine start ...
	I0224 13:28:57.094035  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:28:57.094293  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:28:57.096640  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.097039  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:57.097068  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.097149  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:28:57.097351  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:57.097496  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:57.097643  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:28:57.097810  956077 main.go:141] libmachine: Using SSH client type: native
	I0224 13:28:57.098046  956077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0224 13:28:57.098063  956077 main.go:141] libmachine: About to run SSH command:
	hostname
	I0224 13:28:57.218057  956077 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0224 13:28:57.218090  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetMachineName
	I0224 13:28:57.218365  956077 buildroot.go:166] provisioning hostname "newest-cni-651381"
	I0224 13:28:57.218404  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetMachineName
	I0224 13:28:57.218597  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:28:57.221391  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.221750  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:57.221778  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.221974  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:28:57.222142  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:57.222294  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:57.222392  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:28:57.222531  956077 main.go:141] libmachine: Using SSH client type: native
	I0224 13:28:57.222718  956077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0224 13:28:57.222731  956077 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-651381 && echo "newest-cni-651381" | sudo tee /etc/hostname
	I0224 13:28:57.354081  956077 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-651381
	
	I0224 13:28:57.354129  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:28:57.357103  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.357516  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:57.357552  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.357765  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:28:57.357998  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:57.358156  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:57.358339  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:28:57.358627  956077 main.go:141] libmachine: Using SSH client type: native
	I0224 13:28:57.358827  956077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0224 13:28:57.358843  956077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-651381' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-651381/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-651381' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 13:28:57.483573  956077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 13:28:57.483608  956077 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20451-887294/.minikube CaCertPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20451-887294/.minikube}
	I0224 13:28:57.483657  956077 buildroot.go:174] setting up certificates
	I0224 13:28:57.483671  956077 provision.go:84] configureAuth start
	I0224 13:28:57.483688  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetMachineName
	I0224 13:28:57.484035  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetIP
	I0224 13:28:57.486755  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.487062  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:57.487093  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.487216  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:28:57.489282  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.489619  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:57.489647  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.489808  956077 provision.go:143] copyHostCerts
	I0224 13:28:57.489880  956077 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-887294/.minikube/ca.pem, removing ...
	I0224 13:28:57.489894  956077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-887294/.minikube/ca.pem
	I0224 13:28:57.489977  956077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20451-887294/.minikube/ca.pem (1082 bytes)
	I0224 13:28:57.490110  956077 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-887294/.minikube/cert.pem, removing ...
	I0224 13:28:57.490121  956077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-887294/.minikube/cert.pem
	I0224 13:28:57.490161  956077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20451-887294/.minikube/cert.pem (1123 bytes)
	I0224 13:28:57.490254  956077 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-887294/.minikube/key.pem, removing ...
	I0224 13:28:57.490264  956077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-887294/.minikube/key.pem
	I0224 13:28:57.490300  956077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20451-887294/.minikube/key.pem (1679 bytes)
	I0224 13:28:57.490392  956077 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca-key.pem org=jenkins.newest-cni-651381 san=[127.0.0.1 192.168.39.43 localhost minikube newest-cni-651381]
	I0224 13:28:57.603657  956077 provision.go:177] copyRemoteCerts
	I0224 13:28:57.603728  956077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 13:28:57.603756  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:28:57.606668  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.607001  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:57.607035  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.607186  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:28:57.607409  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:57.607596  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:28:57.607747  956077 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/newest-cni-651381/id_rsa Username:docker}
	I0224 13:28:57.696271  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0224 13:28:57.720966  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0224 13:28:57.745080  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0224 13:28:57.770570  956077 provision.go:87] duration metric: took 286.877496ms to configureAuth
	I0224 13:28:57.770610  956077 buildroot.go:189] setting minikube options for container-runtime
	I0224 13:28:57.770819  956077 config.go:182] Loaded profile config "newest-cni-651381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 13:28:57.770914  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:28:57.773830  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.774134  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:57.774182  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.774374  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:28:57.774576  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:57.774725  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:57.774844  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:28:57.774994  956077 main.go:141] libmachine: Using SSH client type: native
	I0224 13:28:57.775210  956077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0224 13:28:57.775229  956077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0224 13:28:58.015198  956077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0224 13:28:58.015231  956077 machine.go:96] duration metric: took 921.200919ms to provisionDockerMachine
	I0224 13:28:58.015248  956077 start.go:293] postStartSetup for "newest-cni-651381" (driver="kvm2")
	I0224 13:28:58.015261  956077 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 13:28:58.015323  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:28:58.015781  956077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 13:28:58.015825  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:28:58.018588  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:58.018934  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:58.018957  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:58.019113  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:28:58.019321  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:58.019495  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:28:58.019655  956077 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/newest-cni-651381/id_rsa Username:docker}
	I0224 13:28:58.108667  956077 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 13:28:58.113192  956077 info.go:137] Remote host: Buildroot 2023.02.9
	I0224 13:28:58.113221  956077 filesync.go:126] Scanning /home/jenkins/minikube-integration/20451-887294/.minikube/addons for local assets ...
	I0224 13:28:58.113289  956077 filesync.go:126] Scanning /home/jenkins/minikube-integration/20451-887294/.minikube/files for local assets ...
	I0224 13:28:58.113387  956077 filesync.go:149] local asset: /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/ssl/certs/8945642.pem -> 8945642.pem in /etc/ssl/certs
	I0224 13:28:58.113476  956077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0224 13:28:58.123292  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/ssl/certs/8945642.pem --> /etc/ssl/certs/8945642.pem (1708 bytes)
	I0224 13:28:58.150288  956077 start.go:296] duration metric: took 135.022634ms for postStartSetup
	I0224 13:28:58.150340  956077 fix.go:56] duration metric: took 19.248378049s for fixHost
	I0224 13:28:58.150364  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:28:58.152951  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:58.153283  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:58.153338  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:58.153514  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:28:58.153706  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:58.153862  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:58.154044  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:28:58.154233  956077 main.go:141] libmachine: Using SSH client type: native
	I0224 13:28:58.154467  956077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0224 13:28:58.154479  956077 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0224 13:28:58.270588  956077 main.go:141] libmachine: SSH cmd err, output: <nil>: 1740403738.235399997
	
	I0224 13:28:58.270619  956077 fix.go:216] guest clock: 1740403738.235399997
	I0224 13:28:58.270629  956077 fix.go:229] Guest: 2025-02-24 13:28:58.235399997 +0000 UTC Remote: 2025-02-24 13:28:58.150345054 +0000 UTC m=+19.397261834 (delta=85.054943ms)
	I0224 13:28:58.270676  956077 fix.go:200] guest clock delta is within tolerance: 85.054943ms
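The clock check above runs `date +%s.%N` on the guest and compares the result against the host clock to decide whether the delta is within tolerance. A small Go sketch of the same comparison; the 2-second tolerance is an assumption for illustration, not necessarily minikube's threshold.

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns the output of `date +%s.%N` (seconds.nanoseconds,
    // with %N always giving nine digits) into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1740403738.235399997") // value from the log
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = 2 * time.Second // illustrative threshold
    	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < tolerance)
    }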
	I0224 13:28:58.270685  956077 start.go:83] releasing machines lock for "newest-cni-651381", held for 19.368735573s
	I0224 13:28:58.270712  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:28:58.271039  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetIP
	I0224 13:28:58.273607  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:58.274111  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:58.274137  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:58.274333  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:28:58.274936  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:28:58.275139  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:28:58.275266  956077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0224 13:28:58.275326  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:28:58.275372  956077 ssh_runner.go:195] Run: cat /version.json
	I0224 13:28:58.275401  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:28:58.278276  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:58.278682  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:58.278713  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:58.278732  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:58.278841  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:28:58.279035  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:58.279101  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:58.279129  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:58.279314  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:28:58.279344  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:28:58.279459  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:58.279555  956077 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/newest-cni-651381/id_rsa Username:docker}
	I0224 13:28:58.279604  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:28:58.279716  956077 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/newest-cni-651381/id_rsa Username:docker}
	I0224 13:28:58.363123  956077 ssh_runner.go:195] Run: systemctl --version
	I0224 13:28:58.385513  956077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0224 13:28:58.537461  956077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0224 13:28:58.543840  956077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0224 13:28:58.543916  956077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0224 13:28:58.562167  956077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0224 13:28:58.562203  956077 start.go:495] detecting cgroup driver to use...
	I0224 13:28:58.562288  956077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0224 13:28:58.580754  956077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0224 13:28:58.595609  956077 docker.go:217] disabling cri-docker service (if available) ...
	I0224 13:28:58.595684  956077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0224 13:28:58.610441  956077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0224 13:28:58.625512  956077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0224 13:28:58.742160  956077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0224 13:28:58.897257  956077 docker.go:233] disabling docker service ...
	I0224 13:28:58.897354  956077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0224 13:28:58.913053  956077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0224 13:28:58.927511  956077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0224 13:28:59.078303  956077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0224 13:28:59.190231  956077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0224 13:28:59.205007  956077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 13:28:59.224899  956077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0224 13:28:59.224959  956077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:28:59.235985  956077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0224 13:28:59.236076  956077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:28:59.247262  956077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:28:59.258419  956077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:28:59.269559  956077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 13:28:59.281485  956077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:28:59.293207  956077 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:28:59.312591  956077 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
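The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. Purely as an illustration, the first of those edits (the pause_image line) could be done from Go instead of sed as below; the path matches the log, but the function is a sketch, not minikube's code.

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setPauseImage rewrites the pause_image line in a cri-o drop-in file,
    // the Go equivalent of the sed command shown in the log.
    func setPauseImage(path, image string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	updated := re.ReplaceAll(data, []byte(`pause_image = "`+image+`"`))
    	return os.WriteFile(path, updated, 0644)
    }

    func main() {
    	if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10"); err != nil {
    		fmt.Println(err)
    	}
    }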
	I0224 13:28:59.324339  956077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 13:28:59.334891  956077 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0224 13:28:59.334973  956077 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0224 13:28:59.349831  956077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
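The three commands above are the netfilter preflight: probing the bridge sysctl fails because br_netfilter is not yet loaded, so the module is loaded and IPv4 forwarding is switched on. A hedged Go equivalent under the assumption of standard procfs paths and root privileges; this is not minikube's implementation.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// If the bridge sysctl file is missing, the br_netfilter module is not
    	// loaded yet; load it, then enable IPv4 forwarding via procfs.
    	const bridgeSysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"
    	if _, err := os.Stat(bridgeSysctl); os.IsNotExist(err) {
    		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
    			fmt.Printf("modprobe br_netfilter failed: %v\n%s\n", err, out)
    			return
    		}
    	}
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
    		fmt.Printf("enabling ip_forward failed: %v\n", err)
    		return
    	}
    	fmt.Println("netfilter prerequisites configured")
    }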
	I0224 13:28:59.360347  956077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 13:28:59.479779  956077 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0224 13:28:59.577405  956077 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0224 13:28:59.577519  956077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0224 13:28:59.583030  956077 start.go:563] Will wait 60s for crictl version
	I0224 13:28:59.583098  956077 ssh_runner.go:195] Run: which crictl
	I0224 13:28:59.587159  956077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0224 13:28:59.625913  956077 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0224 13:28:59.626017  956077 ssh_runner.go:195] Run: crio --version
	I0224 13:28:59.656040  956077 ssh_runner.go:195] Run: crio --version
	I0224 13:28:59.690484  956077 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0224 13:28:59.691655  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetIP
	I0224 13:28:59.694827  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:59.695279  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:59.695313  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:59.695529  956077 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0224 13:28:59.700214  956077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
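The bash one-liner above rewrites /etc/hosts so that exactly one line maps host.minikube.internal to the gateway IP. The same effect in plain Go, as a sketch only; the path is parameterised so it can be tried on a scratch copy rather than the real /etc/hosts.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry drops any existing line ending in "\t<name>" and appends
    // a fresh "<ip>\t<name>" entry, mirroring the grep -v / echo one-liner.
    func ensureHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // drop any stale entry for this name
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	// work on a scratch copy instead of the real /etc/hosts
    	tmp := "/tmp/hosts.copy"
    	data, _ := os.ReadFile("/etc/hosts")
    	_ = os.WriteFile(tmp, data, 0644)
    	if err := ensureHostsEntry(tmp, "192.168.39.1", "host.minikube.internal"); err != nil {
    		fmt.Println(err)
    	}
    }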
	I0224 13:28:59.714858  956077 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0224 13:28:59.716146  956077 kubeadm.go:883] updating cluster {Name:newest-cni-651381 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-651381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0224 13:28:59.716344  956077 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0224 13:28:59.716441  956077 ssh_runner.go:195] Run: sudo crictl images --output json
	I0224 13:28:59.759022  956077 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0224 13:28:59.759106  956077 ssh_runner.go:195] Run: which lz4
	I0224 13:28:59.763641  956077 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0224 13:28:59.768063  956077 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0224 13:28:59.768104  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0224 13:29:01.313361  956077 crio.go:462] duration metric: took 1.549763964s to copy over tarball
	I0224 13:29:01.313502  956077 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0224 13:29:03.649181  956077 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.335640797s)
	I0224 13:29:03.649213  956077 crio.go:469] duration metric: took 2.335814633s to extract the tarball
	I0224 13:29:03.649221  956077 ssh_runner.go:146] rm: /preloaded.tar.lz4
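The preload step above copies a ~400 MB lz4-compressed tarball of container images into the guest, unpacks it into /var with xattrs preserved, and then deletes it. The tar invocation, reproduced as a small Go wrapper purely for illustration; the paths are the ones from the log and the command must run inside the guest with lz4 installed.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	// Extract the preloaded image tarball the same way the log does:
    	// lz4-compressed tar, preserving security.capability xattrs, into /var.
    	start := time.Now()
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		fmt.Printf("extract failed: %v\n%s\n", err, out)
    		return
    	}
    	fmt.Printf("extracted preloaded images in %v\n", time.Since(start))
    }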
	I0224 13:29:03.687968  956077 ssh_runner.go:195] Run: sudo crictl images --output json
	I0224 13:29:03.741442  956077 crio.go:514] all images are preloaded for cri-o runtime.
	I0224 13:29:03.741478  956077 cache_images.go:84] Images are preloaded, skipping loading
	I0224 13:29:03.741490  956077 kubeadm.go:934] updating node { 192.168.39.43 8443 v1.32.2 crio true true} ...
	I0224 13:29:03.741662  956077 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-651381 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.43
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:newest-cni-651381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0224 13:29:03.741787  956077 ssh_runner.go:195] Run: crio config
	I0224 13:29:03.799716  956077 cni.go:84] Creating CNI manager for ""
	I0224 13:29:03.799747  956077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0224 13:29:03.799764  956077 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0224 13:29:03.799794  956077 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.43 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-651381 NodeName:newest-cni-651381 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.43"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.43 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0224 13:29:03.799960  956077 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.43
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-651381"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.43"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.43"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
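The block above is the full kubeadm config that minikube writes to /var/tmp/minikube/kubeadm.yaml.new (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). One way to sanity-check such a file without touching a running node is a kubeadm dry run; the Go wrapper below is an illustrative sketch of that idea and is not what minikube itself does during a restart.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Dry-run kubeadm against the generated config; path matches the log.
    	// Run this on the guest, not the CI host.
    	cmd := exec.Command("sudo", "kubeadm", "init",
    		"--config", "/var/tmp/minikube/kubeadm.yaml.new", "--dry-run")
    	out, err := cmd.CombinedOutput()
    	fmt.Printf("%s\n", out)
    	if err != nil {
    		fmt.Printf("dry run failed: %v\n", err)
    	}
    }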
	
	I0224 13:29:03.800042  956077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0224 13:29:03.811912  956077 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 13:29:03.812012  956077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0224 13:29:03.823338  956077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0224 13:29:03.842685  956077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0224 13:29:03.861976  956077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0224 13:29:03.882258  956077 ssh_runner.go:195] Run: grep 192.168.39.43	control-plane.minikube.internal$ /etc/hosts
	I0224 13:29:03.887084  956077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.43	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 13:29:03.902004  956077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 13:29:04.052713  956077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0224 13:29:04.071828  956077 certs.go:68] Setting up /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/newest-cni-651381 for IP: 192.168.39.43
	I0224 13:29:04.071866  956077 certs.go:194] generating shared ca certs ...
	I0224 13:29:04.071893  956077 certs.go:226] acquiring lock for ca certs: {Name:mk38777c6b180f63d1816020cff79a01106ddf33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:29:04.072105  956077 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20451-887294/.minikube/ca.key
	I0224 13:29:04.072202  956077 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20451-887294/.minikube/proxy-client-ca.key
	I0224 13:29:04.072219  956077 certs.go:256] generating profile certs ...
	I0224 13:29:04.072346  956077 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/newest-cni-651381/client.key
	I0224 13:29:04.072430  956077 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/newest-cni-651381/apiserver.key.5ef52652
	I0224 13:29:04.072487  956077 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/newest-cni-651381/proxy-client.key
	I0224 13:29:04.072689  956077 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/894564.pem (1338 bytes)
	W0224 13:29:04.072726  956077 certs.go:480] ignoring /home/jenkins/minikube-integration/20451-887294/.minikube/certs/894564_empty.pem, impossibly tiny 0 bytes
	I0224 13:29:04.072737  956077 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca-key.pem (1679 bytes)
	I0224 13:29:04.072760  956077 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem (1082 bytes)
	I0224 13:29:04.072785  956077 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem (1123 bytes)
	I0224 13:29:04.072809  956077 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/key.pem (1679 bytes)
	I0224 13:29:04.072844  956077 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/ssl/certs/8945642.pem (1708 bytes)
	I0224 13:29:04.073566  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0224 13:29:04.112077  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0224 13:29:04.149068  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0224 13:29:04.179616  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0224 13:29:04.209417  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/newest-cni-651381/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0224 13:29:04.245961  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/newest-cni-651381/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0224 13:29:04.279758  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/newest-cni-651381/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0224 13:29:04.306976  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/newest-cni-651381/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0224 13:29:04.334286  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0224 13:29:04.361320  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/certs/894564.pem --> /usr/share/ca-certificates/894564.pem (1338 bytes)
	I0224 13:29:04.387966  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/ssl/certs/8945642.pem --> /usr/share/ca-certificates/8945642.pem (1708 bytes)
	I0224 13:29:04.414747  956077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0224 13:29:04.433921  956077 ssh_runner.go:195] Run: openssl version
	I0224 13:29:04.440667  956077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8945642.pem && ln -fs /usr/share/ca-certificates/8945642.pem /etc/ssl/certs/8945642.pem"
	I0224 13:29:04.453454  956077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8945642.pem
	I0224 13:29:04.459040  956077 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 24 12:09 /usr/share/ca-certificates/8945642.pem
	I0224 13:29:04.459108  956077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8945642.pem
	I0224 13:29:04.466078  956077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8945642.pem /etc/ssl/certs/3ec20f2e.0"
	I0224 13:29:04.478970  956077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0224 13:29:04.491228  956077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0224 13:29:04.496708  956077 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 24 12:01 /usr/share/ca-certificates/minikubeCA.pem
	I0224 13:29:04.496771  956077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0224 13:29:04.503067  956077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0224 13:29:04.515240  956077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/894564.pem && ln -fs /usr/share/ca-certificates/894564.pem /etc/ssl/certs/894564.pem"
	I0224 13:29:04.527524  956077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/894564.pem
	I0224 13:29:04.532779  956077 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 24 12:09 /usr/share/ca-certificates/894564.pem
	I0224 13:29:04.532845  956077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/894564.pem
	I0224 13:29:04.539425  956077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/894564.pem /etc/ssl/certs/51391683.0"
	I0224 13:29:04.551398  956077 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0224 13:29:04.556720  956077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0224 13:29:04.566700  956077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0224 13:29:04.573865  956077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0224 13:29:04.580856  956077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0224 13:29:04.588174  956077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0224 13:29:04.595837  956077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0224 13:29:04.603384  956077 kubeadm.go:392] StartCluster: {Name:newest-cni-651381 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-651381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 13:29:04.603508  956077 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0224 13:29:04.603592  956077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0224 13:29:04.647022  956077 cri.go:89] found id: ""
	I0224 13:29:04.647118  956077 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0224 13:29:04.658566  956077 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0224 13:29:04.658595  956077 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0224 13:29:04.658664  956077 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0224 13:29:04.669446  956077 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0224 13:29:04.670107  956077 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-651381" does not appear in /home/jenkins/minikube-integration/20451-887294/kubeconfig
	I0224 13:29:04.670340  956077 kubeconfig.go:62] /home/jenkins/minikube-integration/20451-887294/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-651381" cluster setting kubeconfig missing "newest-cni-651381" context setting]
	I0224 13:29:04.670763  956077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/kubeconfig: {Name:mk0122b69f41cd40d5267f436266ccce22ce5ef2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:29:04.703477  956077 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0224 13:29:04.714783  956077 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.43
	I0224 13:29:04.714826  956077 kubeadm.go:1160] stopping kube-system containers ...
	I0224 13:29:04.714856  956077 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0224 13:29:04.714926  956077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0224 13:29:04.753447  956077 cri.go:89] found id: ""
	I0224 13:29:04.753549  956077 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0224 13:29:04.771436  956077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 13:29:04.782526  956077 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 13:29:04.782550  956077 kubeadm.go:157] found existing configuration files:
	
	I0224 13:29:04.782599  956077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0224 13:29:04.793248  956077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0224 13:29:04.793349  956077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0224 13:29:04.804033  956077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0224 13:29:04.814167  956077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0224 13:29:04.814256  956077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0224 13:29:04.824390  956077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0224 13:29:04.835928  956077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0224 13:29:04.836009  956077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0224 13:29:04.846849  956077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0224 13:29:04.857291  956077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0224 13:29:04.857371  956077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0224 13:29:04.868432  956077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0224 13:29:04.879429  956077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 13:29:05.016556  956077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 13:29:05.855312  956077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0224 13:29:06.068970  956077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 13:29:06.138545  956077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0224 13:29:06.252222  956077 api_server.go:52] waiting for apiserver process to appear ...
	I0224 13:29:06.252315  956077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:29:06.752623  956077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:29:07.253475  956077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:29:07.273087  956077 api_server.go:72] duration metric: took 1.020861784s to wait for apiserver process to appear ...
	I0224 13:29:07.273129  956077 api_server.go:88] waiting for apiserver healthz status ...
	I0224 13:29:07.273156  956077 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0224 13:29:07.273777  956077 api_server.go:269] stopped: https://192.168.39.43:8443/healthz: Get "https://192.168.39.43:8443/healthz": dial tcp 192.168.39.43:8443: connect: connection refused
	I0224 13:29:07.773461  956077 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0224 13:29:10.395720  956077 api_server.go:279] https://192.168.39.43:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0224 13:29:10.395756  956077 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0224 13:29:10.395777  956077 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0224 13:29:10.424020  956077 api_server.go:279] https://192.168.39.43:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0224 13:29:10.424060  956077 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0224 13:29:10.773537  956077 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0224 13:29:10.778715  956077 api_server.go:279] https://192.168.39.43:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0224 13:29:10.778749  956077 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 13:29:11.273360  956077 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0224 13:29:11.282850  956077 api_server.go:279] https://192.168.39.43:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0224 13:29:11.282888  956077 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 13:29:11.773530  956077 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0224 13:29:11.782399  956077 api_server.go:279] https://192.168.39.43:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0224 13:29:11.782431  956077 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 13:29:12.274112  956077 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0224 13:29:12.279760  956077 api_server.go:279] https://192.168.39.43:8443/healthz returned 200:
	ok
	I0224 13:29:12.286489  956077 api_server.go:141] control plane version: v1.32.2
	I0224 13:29:12.286522  956077 api_server.go:131] duration metric: took 5.013385837s to wait for apiserver health ...
	I0224 13:29:12.286533  956077 cni.go:84] Creating CNI manager for ""
	I0224 13:29:12.286540  956077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0224 13:29:12.288455  956077 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0224 13:29:12.289765  956077 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0224 13:29:12.302198  956077 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0224 13:29:12.341287  956077 system_pods.go:43] waiting for kube-system pods to appear ...
	I0224 13:29:12.353152  956077 system_pods.go:59] 8 kube-system pods found
	I0224 13:29:12.353227  956077 system_pods.go:61] "coredns-668d6bf9bc-5fzqg" [081ec828-51bc-43dd-8eb5-50027cd1e5ce] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0224 13:29:12.353242  956077 system_pods.go:61] "etcd-newest-cni-651381" [49ed84ef-a3f9-41e6-969d-9c36df52bd1e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0224 13:29:12.353256  956077 system_pods.go:61] "kube-apiserver-newest-cni-651381" [3fc7c3f3-60dd-4be5-83d3-43fff952ccb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0224 13:29:12.353266  956077 system_pods.go:61] "kube-controller-manager-newest-cni-651381" [f24e71f1-80e9-408a-b3d9-ad900b5e1955] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0224 13:29:12.353282  956077 system_pods.go:61] "kube-proxy-lh4cg" [024a70db-68c8-4faf-9072-9957034b592a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0224 13:29:12.353292  956077 system_pods.go:61] "kube-scheduler-newest-cni-651381" [9afed0fd-e49a-4d28-9504-1562a04fbb7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0224 13:29:12.353335  956077 system_pods.go:61] "metrics-server-f79f97bbb-zcgjt" [6afaa917-e3b5-4c04-8853-4936ba182e4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0224 13:29:12.353346  956077 system_pods.go:61] "storage-provisioner" [dd4ee237-b34c-481b-8a9d-ff296eca352b] Running
	I0224 13:29:12.353359  956077 system_pods.go:74] duration metric: took 12.029012ms to wait for pod list to return data ...
	I0224 13:29:12.353373  956077 node_conditions.go:102] verifying NodePressure condition ...
	I0224 13:29:12.364913  956077 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0224 13:29:12.364957  956077 node_conditions.go:123] node cpu capacity is 2
	I0224 13:29:12.364975  956077 node_conditions.go:105] duration metric: took 11.585246ms to run NodePressure ...
	I0224 13:29:12.365016  956077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 13:29:12.738521  956077 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0224 13:29:12.751756  956077 ops.go:34] apiserver oom_adj: -16
	I0224 13:29:12.751784  956077 kubeadm.go:597] duration metric: took 8.093182521s to restartPrimaryControlPlane
	I0224 13:29:12.751797  956077 kubeadm.go:394] duration metric: took 8.148429756s to StartCluster
	I0224 13:29:12.751815  956077 settings.go:142] acquiring lock: {Name:mk663e441d32b04abcccdab86db3e15276e74de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:29:12.751904  956077 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20451-887294/kubeconfig
	I0224 13:29:12.752732  956077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/kubeconfig: {Name:mk0122b69f41cd40d5267f436266ccce22ce5ef2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:29:12.753015  956077 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0224 13:29:12.753115  956077 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0224 13:29:12.753237  956077 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-651381"
	I0224 13:29:12.753262  956077 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-651381"
	W0224 13:29:12.753270  956077 addons.go:247] addon storage-provisioner should already be in state true
	I0224 13:29:12.753272  956077 addons.go:69] Setting default-storageclass=true in profile "newest-cni-651381"
	I0224 13:29:12.753291  956077 config.go:182] Loaded profile config "newest-cni-651381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 13:29:12.753300  956077 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-651381"
	I0224 13:29:12.753324  956077 host.go:66] Checking if "newest-cni-651381" exists ...
	I0224 13:29:12.753334  956077 addons.go:69] Setting dashboard=true in profile "newest-cni-651381"
	I0224 13:29:12.753345  956077 addons.go:69] Setting metrics-server=true in profile "newest-cni-651381"
	I0224 13:29:12.753365  956077 addons.go:238] Setting addon dashboard=true in "newest-cni-651381"
	I0224 13:29:12.753372  956077 addons.go:238] Setting addon metrics-server=true in "newest-cni-651381"
	W0224 13:29:12.753382  956077 addons.go:247] addon dashboard should already be in state true
	W0224 13:29:12.753389  956077 addons.go:247] addon metrics-server should already be in state true
	I0224 13:29:12.753419  956077 host.go:66] Checking if "newest-cni-651381" exists ...
	I0224 13:29:12.753424  956077 host.go:66] Checking if "newest-cni-651381" exists ...
	I0224 13:29:12.753799  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:29:12.753809  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:29:12.753844  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:29:12.753852  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:29:12.753859  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:29:12.753877  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:29:12.753896  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:29:12.753907  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:29:12.756327  956077 out.go:177] * Verifying Kubernetes components...
	I0224 13:29:12.757988  956077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 13:29:12.770827  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41209
	I0224 13:29:12.771035  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34785
	I0224 13:29:12.771532  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:29:12.771609  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:29:12.772161  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:29:12.772186  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:29:12.772228  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:29:12.772250  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:29:12.772280  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46419
	I0224 13:29:12.772345  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39593
	I0224 13:29:12.772705  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:29:12.772733  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:29:12.772777  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:29:12.772856  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:29:12.772908  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetState
	I0224 13:29:12.773495  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:29:12.773541  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:29:12.773925  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:29:12.773937  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:29:12.773948  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:29:12.773953  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:29:12.774427  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:29:12.774735  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:29:12.775094  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:29:12.775132  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:29:12.775346  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:29:12.775386  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:29:12.790773  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38083
	I0224 13:29:12.791279  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:29:12.791520  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37107
	I0224 13:29:12.791793  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:29:12.791815  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:29:12.792028  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:29:12.792228  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:29:12.792458  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetState
	I0224 13:29:12.792693  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:29:12.792728  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:29:12.793147  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:29:12.793354  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetState
	I0224 13:29:12.794339  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:29:12.795159  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:29:12.796980  956077 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0224 13:29:12.797044  956077 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0224 13:29:12.798873  956077 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0224 13:29:12.798897  956077 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0224 13:29:12.798924  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:29:12.799025  956077 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0224 13:29:12.800379  956077 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0224 13:29:12.800413  956077 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0224 13:29:12.800444  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:29:12.802889  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:29:12.803112  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:29:12.803154  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:29:12.803253  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:29:12.803514  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:29:12.803684  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:29:12.803835  956077 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/newest-cni-651381/id_rsa Username:docker}
	I0224 13:29:12.804218  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:29:12.804781  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:29:12.804865  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:29:12.804986  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:29:12.805169  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:29:12.805331  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:29:12.805504  956077 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/newest-cni-651381/id_rsa Username:docker}
	I0224 13:29:12.805863  956077 addons.go:238] Setting addon default-storageclass=true in "newest-cni-651381"
	W0224 13:29:12.805886  956077 addons.go:247] addon default-storageclass should already be in state true
	I0224 13:29:12.805921  956077 host.go:66] Checking if "newest-cni-651381" exists ...
	I0224 13:29:12.806263  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:29:12.806310  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:29:12.822073  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36359
	I0224 13:29:12.822078  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33077
	I0224 13:29:12.822532  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:29:12.822608  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:29:12.823097  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:29:12.823120  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:29:12.823190  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:29:12.823208  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:29:12.823472  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:29:12.823587  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:29:12.823766  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetState
	I0224 13:29:12.824054  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:29:12.824092  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:29:12.825722  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:29:12.827968  956077 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 13:29:12.829697  956077 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0224 13:29:12.829721  956077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0224 13:29:12.829743  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:29:12.833829  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:29:12.834243  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:29:12.834272  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:29:12.834576  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:29:12.834868  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:29:12.835030  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:29:12.835176  956077 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/newest-cni-651381/id_rsa Username:docker}
	I0224 13:29:12.841346  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45733
	I0224 13:29:12.841788  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:29:12.842314  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:29:12.842345  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:29:12.842757  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:29:12.842974  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetState
	I0224 13:29:12.844679  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:29:12.844903  956077 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0224 13:29:12.844923  956077 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0224 13:29:12.844944  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:29:12.847773  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:29:12.848236  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:29:12.848274  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:29:12.848424  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:29:12.848652  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:29:12.848819  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:29:12.848952  956077 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/newest-cni-651381/id_rsa Username:docker}
	I0224 13:29:12.994330  956077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0224 13:29:13.013328  956077 api_server.go:52] waiting for apiserver process to appear ...
	I0224 13:29:13.013419  956077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:29:13.031907  956077 api_server.go:72] duration metric: took 278.851886ms to wait for apiserver process to appear ...
	I0224 13:29:13.031946  956077 api_server.go:88] waiting for apiserver healthz status ...
	I0224 13:29:13.031974  956077 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0224 13:29:13.037741  956077 api_server.go:279] https://192.168.39.43:8443/healthz returned 200:
	ok
	I0224 13:29:13.038717  956077 api_server.go:141] control plane version: v1.32.2
	I0224 13:29:13.038740  956077 api_server.go:131] duration metric: took 6.786687ms to wait for apiserver health ...
	I0224 13:29:13.038749  956077 system_pods.go:43] waiting for kube-system pods to appear ...
	I0224 13:29:13.041638  956077 system_pods.go:59] 8 kube-system pods found
	I0224 13:29:13.041677  956077 system_pods.go:61] "coredns-668d6bf9bc-5fzqg" [081ec828-51bc-43dd-8eb5-50027cd1e5ce] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0224 13:29:13.041689  956077 system_pods.go:61] "etcd-newest-cni-651381" [49ed84ef-a3f9-41e6-969d-9c36df52bd1e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0224 13:29:13.041699  956077 system_pods.go:61] "kube-apiserver-newest-cni-651381" [3fc7c3f3-60dd-4be5-83d3-43fff952ccb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0224 13:29:13.041707  956077 system_pods.go:61] "kube-controller-manager-newest-cni-651381" [f24e71f1-80e9-408a-b3d9-ad900b5e1955] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0224 13:29:13.041713  956077 system_pods.go:61] "kube-proxy-lh4cg" [024a70db-68c8-4faf-9072-9957034b592a] Running
	I0224 13:29:13.041723  956077 system_pods.go:61] "kube-scheduler-newest-cni-651381" [9afed0fd-e49a-4d28-9504-1562a04fbb7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0224 13:29:13.041734  956077 system_pods.go:61] "metrics-server-f79f97bbb-zcgjt" [6afaa917-e3b5-4c04-8853-4936ba182e4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0224 13:29:13.041744  956077 system_pods.go:61] "storage-provisioner" [dd4ee237-b34c-481b-8a9d-ff296eca352b] Running
	I0224 13:29:13.041755  956077 system_pods.go:74] duration metric: took 2.998451ms to wait for pod list to return data ...
	I0224 13:29:13.041769  956077 default_sa.go:34] waiting for default service account to be created ...
	I0224 13:29:13.045370  956077 default_sa.go:45] found service account: "default"
	I0224 13:29:13.045406  956077 default_sa.go:55] duration metric: took 3.628344ms for default service account to be created ...
	I0224 13:29:13.045423  956077 kubeadm.go:582] duration metric: took 292.373047ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0224 13:29:13.045461  956077 node_conditions.go:102] verifying NodePressure condition ...
	I0224 13:29:13.048412  956077 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0224 13:29:13.048450  956077 node_conditions.go:123] node cpu capacity is 2
	I0224 13:29:13.048465  956077 node_conditions.go:105] duration metric: took 2.99453ms to run NodePressure ...
	I0224 13:29:13.048482  956077 start.go:241] waiting for startup goroutines ...
	I0224 13:29:13.107171  956077 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0224 13:29:13.107201  956077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0224 13:29:13.119071  956077 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0224 13:29:13.119103  956077 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0224 13:29:13.134996  956077 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0224 13:29:13.135034  956077 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0224 13:29:13.155551  956077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0224 13:29:13.185957  956077 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0224 13:29:13.185995  956077 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0224 13:29:13.186048  956077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0224 13:29:13.188044  956077 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0224 13:29:13.188069  956077 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0224 13:29:13.231557  956077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0224 13:29:13.247560  956077 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0224 13:29:13.247593  956077 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0224 13:29:13.353680  956077 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0224 13:29:13.353706  956077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0224 13:29:13.453436  956077 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0224 13:29:13.453467  956077 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0224 13:29:13.612651  956077 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0224 13:29:13.612689  956077 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0224 13:29:13.761435  956077 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0224 13:29:13.761484  956077 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0224 13:29:11.224324  953268 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0224 13:29:11.225286  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:29:11.225572  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:29:13.875252  956077 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0224 13:29:13.875291  956077 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0224 13:29:13.988211  956077 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0224 13:29:13.988245  956077 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0224 13:29:14.040504  956077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0224 13:29:14.735719  956077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.580126907s)
	I0224 13:29:14.735772  956077 main.go:141] libmachine: Making call to close driver server
	I0224 13:29:14.735781  956077 main.go:141] libmachine: (newest-cni-651381) Calling .Close
	I0224 13:29:14.735890  956077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.549811056s)
	I0224 13:29:14.735948  956077 main.go:141] libmachine: Making call to close driver server
	I0224 13:29:14.735960  956077 main.go:141] libmachine: (newest-cni-651381) Calling .Close
	I0224 13:29:14.736196  956077 main.go:141] libmachine: (newest-cni-651381) DBG | Closing plugin on server side
	I0224 13:29:14.736226  956077 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:29:14.736242  956077 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:29:14.736258  956077 main.go:141] libmachine: Making call to close driver server
	I0224 13:29:14.736272  956077 main.go:141] libmachine: (newest-cni-651381) Calling .Close
	I0224 13:29:14.736296  956077 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:29:14.736311  956077 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:29:14.736321  956077 main.go:141] libmachine: Making call to close driver server
	I0224 13:29:14.736344  956077 main.go:141] libmachine: (newest-cni-651381) Calling .Close
	I0224 13:29:14.736595  956077 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:29:14.736611  956077 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:29:14.736658  956077 main.go:141] libmachine: (newest-cni-651381) DBG | Closing plugin on server side
	I0224 13:29:14.736872  956077 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:29:14.736892  956077 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:29:14.745116  956077 main.go:141] libmachine: Making call to close driver server
	I0224 13:29:14.745148  956077 main.go:141] libmachine: (newest-cni-651381) Calling .Close
	I0224 13:29:14.745492  956077 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:29:14.745517  956077 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:29:14.745526  956077 main.go:141] libmachine: (newest-cni-651381) DBG | Closing plugin on server side
	I0224 13:29:14.894767  956077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.663157775s)
	I0224 13:29:14.894851  956077 main.go:141] libmachine: Making call to close driver server
	I0224 13:29:14.894872  956077 main.go:141] libmachine: (newest-cni-651381) Calling .Close
	I0224 13:29:14.895200  956077 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:29:14.895223  956077 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:29:14.895234  956077 main.go:141] libmachine: Making call to close driver server
	I0224 13:29:14.895241  956077 main.go:141] libmachine: (newest-cni-651381) Calling .Close
	I0224 13:29:14.895512  956077 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:29:14.895531  956077 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:29:14.895543  956077 addons.go:479] Verifying addon metrics-server=true in "newest-cni-651381"
	I0224 13:29:15.529417  956077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.48881961s)
	I0224 13:29:15.529510  956077 main.go:141] libmachine: Making call to close driver server
	I0224 13:29:15.529526  956077 main.go:141] libmachine: (newest-cni-651381) Calling .Close
	I0224 13:29:15.529885  956077 main.go:141] libmachine: (newest-cni-651381) DBG | Closing plugin on server side
	I0224 13:29:15.529896  956077 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:29:15.529910  956077 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:29:15.529921  956077 main.go:141] libmachine: Making call to close driver server
	I0224 13:29:15.529930  956077 main.go:141] libmachine: (newest-cni-651381) Calling .Close
	I0224 13:29:15.530216  956077 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:29:15.530235  956077 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:29:15.532337  956077 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-651381 addons enable metrics-server
	
	I0224 13:29:15.534011  956077 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0224 13:29:15.535543  956077 addons.go:514] duration metric: took 2.78244386s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0224 13:29:15.535586  956077 start.go:246] waiting for cluster config update ...
	I0224 13:29:15.535599  956077 start.go:255] writing updated cluster config ...
	I0224 13:29:15.535868  956077 ssh_runner.go:195] Run: rm -f paused
	I0224 13:29:15.604806  956077 start.go:600] kubectl: 1.32.2, cluster: 1.32.2 (minor skew: 0)
	I0224 13:29:15.606756  956077 out.go:177] * Done! kubectl is now configured to use "newest-cni-651381" cluster and "default" namespace by default
	I0224 13:29:16.226144  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:29:16.226358  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:29:26.227187  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:29:26.227476  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:29:46.228012  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:29:46.228297  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:30:26.229952  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:30:26.230229  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:30:26.230260  953268 kubeadm.go:310] 
	I0224 13:30:26.230300  953268 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0224 13:30:26.230364  953268 kubeadm.go:310] 		timed out waiting for the condition
	I0224 13:30:26.230392  953268 kubeadm.go:310] 
	I0224 13:30:26.230441  953268 kubeadm.go:310] 	This error is likely caused by:
	I0224 13:30:26.230505  953268 kubeadm.go:310] 		- The kubelet is not running
	I0224 13:30:26.230648  953268 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0224 13:30:26.230661  953268 kubeadm.go:310] 
	I0224 13:30:26.230806  953268 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0224 13:30:26.230857  953268 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0224 13:30:26.230902  953268 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0224 13:30:26.230911  953268 kubeadm.go:310] 
	I0224 13:30:26.231038  953268 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0224 13:30:26.231147  953268 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0224 13:30:26.231163  953268 kubeadm.go:310] 
	I0224 13:30:26.231301  953268 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0224 13:30:26.231435  953268 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0224 13:30:26.231545  953268 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0224 13:30:26.231657  953268 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0224 13:30:26.231675  953268 kubeadm.go:310] 
	I0224 13:30:26.232473  953268 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 13:30:26.232591  953268 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0224 13:30:26.232710  953268 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0224 13:30:26.232936  953268 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
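	The kubeadm output above points at the kubelet itself. A minimal, consolidated sketch of the checks it recommends, run on the affected node (assuming `minikube ssh` access to this profile; the healthz port, crio socket path, and service name are taken from the log itself):
	
	  # Is the kubelet service running? The WARNING above also notes it is not enabled.
	  sudo systemctl status kubelet
	  sudo systemctl enable kubelet.service
	  # Recent kubelet logs
	  sudo journalctl -xeu kubelet
	  # The same health probe kubeadm's kubelet-check performs
	  curl -sSL http://localhost:10248/healthz
	  # Did CRI-O start any control-plane containers at all?
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause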
	
	I0224 13:30:26.232991  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0224 13:30:26.704666  953268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 13:30:26.720451  953268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 13:30:26.732280  953268 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 13:30:26.732306  953268 kubeadm.go:157] found existing configuration files:
	
	I0224 13:30:26.732371  953268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0224 13:30:26.743971  953268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0224 13:30:26.744050  953268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0224 13:30:26.755216  953268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0224 13:30:26.766460  953268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0224 13:30:26.766542  953268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0224 13:30:26.778117  953268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0224 13:30:26.789142  953268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0224 13:30:26.789208  953268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0224 13:30:26.800621  953268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0224 13:30:26.811672  953268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0224 13:30:26.811755  953268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0224 13:30:26.823061  953268 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0224 13:30:27.039614  953268 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 13:32:23.115672  953268 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0224 13:32:23.115858  953268 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0224 13:32:23.117520  953268 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0224 13:32:23.117626  953268 kubeadm.go:310] [preflight] Running pre-flight checks
	I0224 13:32:23.117831  953268 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0224 13:32:23.118008  953268 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0224 13:32:23.118171  953268 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0224 13:32:23.118281  953268 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0224 13:32:23.120434  953268 out.go:235]   - Generating certificates and keys ...
	I0224 13:32:23.120529  953268 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0224 13:32:23.120621  953268 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0224 13:32:23.120736  953268 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0224 13:32:23.120819  953268 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0224 13:32:23.120905  953268 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0224 13:32:23.120957  953268 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0224 13:32:23.121011  953268 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0224 13:32:23.121066  953268 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0224 13:32:23.121134  953268 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0224 13:32:23.121202  953268 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0224 13:32:23.121237  953268 kubeadm.go:310] [certs] Using the existing "sa" key
	I0224 13:32:23.121355  953268 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0224 13:32:23.121422  953268 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0224 13:32:23.121526  953268 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0224 13:32:23.121602  953268 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0224 13:32:23.121654  953268 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0224 13:32:23.121775  953268 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 13:32:23.121914  953268 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 13:32:23.121964  953268 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0224 13:32:23.122028  953268 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0224 13:32:23.123732  953268 out.go:235]   - Booting up control plane ...
	I0224 13:32:23.123835  953268 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0224 13:32:23.123904  953268 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0224 13:32:23.123986  953268 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0224 13:32:23.124096  953268 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0224 13:32:23.124279  953268 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0224 13:32:23.124332  953268 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0224 13:32:23.124401  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:32:23.124595  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:32:23.124691  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:32:23.124893  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:32:23.124960  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:32:23.125150  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:32:23.125220  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:32:23.125409  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:32:23.125508  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:32:23.125791  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:32:23.125817  953268 kubeadm.go:310] 
	I0224 13:32:23.125871  953268 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0224 13:32:23.125925  953268 kubeadm.go:310] 		timed out waiting for the condition
	I0224 13:32:23.125935  953268 kubeadm.go:310] 
	I0224 13:32:23.125985  953268 kubeadm.go:310] 	This error is likely caused by:
	I0224 13:32:23.126040  953268 kubeadm.go:310] 		- The kubelet is not running
	I0224 13:32:23.126194  953268 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0224 13:32:23.126222  953268 kubeadm.go:310] 
	I0224 13:32:23.126328  953268 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0224 13:32:23.126364  953268 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0224 13:32:23.126411  953268 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0224 13:32:23.126421  953268 kubeadm.go:310] 
	I0224 13:32:23.126543  953268 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0224 13:32:23.126655  953268 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0224 13:32:23.126665  953268 kubeadm.go:310] 
	I0224 13:32:23.126777  953268 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0224 13:32:23.126856  953268 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0224 13:32:23.126925  953268 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0224 13:32:23.127003  953268 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0224 13:32:23.127087  953268 kubeadm.go:310] 
	I0224 13:32:23.127095  953268 kubeadm.go:394] duration metric: took 7m58.850238597s to StartCluster
	I0224 13:32:23.127168  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:32:23.127245  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:32:23.173206  953268 cri.go:89] found id: ""
	I0224 13:32:23.173252  953268 logs.go:282] 0 containers: []
	W0224 13:32:23.173265  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:32:23.173274  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:32:23.173355  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:32:23.220974  953268 cri.go:89] found id: ""
	I0224 13:32:23.221008  953268 logs.go:282] 0 containers: []
	W0224 13:32:23.221017  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:32:23.221024  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:32:23.221095  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:32:23.256282  953268 cri.go:89] found id: ""
	I0224 13:32:23.256316  953268 logs.go:282] 0 containers: []
	W0224 13:32:23.256327  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:32:23.256335  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:32:23.256423  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:32:23.292296  953268 cri.go:89] found id: ""
	I0224 13:32:23.292329  953268 logs.go:282] 0 containers: []
	W0224 13:32:23.292340  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:32:23.292355  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:32:23.292422  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:32:23.328368  953268 cri.go:89] found id: ""
	I0224 13:32:23.328399  953268 logs.go:282] 0 containers: []
	W0224 13:32:23.328408  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:32:23.328414  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:32:23.328488  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:32:23.380963  953268 cri.go:89] found id: ""
	I0224 13:32:23.380995  953268 logs.go:282] 0 containers: []
	W0224 13:32:23.381005  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:32:23.381014  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:32:23.381083  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:32:23.448170  953268 cri.go:89] found id: ""
	I0224 13:32:23.448206  953268 logs.go:282] 0 containers: []
	W0224 13:32:23.448219  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:32:23.448227  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:32:23.448301  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:32:23.494938  953268 cri.go:89] found id: ""
	I0224 13:32:23.494969  953268 logs.go:282] 0 containers: []
	W0224 13:32:23.494978  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:32:23.494989  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:32:23.495004  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:32:23.545770  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:32:23.545817  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:32:23.561559  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:32:23.561608  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:32:23.639942  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:32:23.639969  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:32:23.639983  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:32:23.748671  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:32:23.748715  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0224 13:32:23.790465  953268 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0224 13:32:23.790543  953268 out.go:270] * 
	W0224 13:32:23.790632  953268 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0224 13:32:23.790650  953268 out.go:270] * 
	W0224 13:32:23.791585  953268 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0224 13:32:23.796216  953268 out.go:201] 
	W0224 13:32:23.797430  953268 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0224 13:32:23.797505  953268 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0224 13:32:23.797547  953268 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0224 13:32:23.799102  953268 out.go:201] 
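	The suggestion above amounts to aligning the kubelet's cgroup driver with CRI-O's. A sketch of the retry and verification, under stated assumptions: the profile name is taken from the CRI-O log below, the kubelet config path from the kubeadm output above, and the /etc/crio/ config location is an assumption rather than something this log confirms.
	
	  # Retry the start with an explicit kubelet cgroup driver (per the suggestion above)
	  minikube start -p old-k8s-version-233759 --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
	  # Check which cgroup manager CRI-O is configured with (config location assumed)
	  minikube -p old-k8s-version-233759 ssh "sudo grep -ri cgroup_manager /etc/crio/"
	  # Check which cgroup driver the kubelet config was written with
	  minikube -p old-k8s-version-233759 ssh "sudo grep -i cgroupDriver /var/lib/kubelet/config.yaml"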
	
	
	==> CRI-O <==
	Feb 24 13:41:26 old-k8s-version-233759 crio[626]: time="2025-02-24 13:41:26.363020130Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740404486362978806,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6d772604-a1e3-43aa-91dd-efb83fd90788 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:41:26 old-k8s-version-233759 crio[626]: time="2025-02-24 13:41:26.363604246Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=82473887-9b62-4bed-ad37-c30903cc9b87 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:41:26 old-k8s-version-233759 crio[626]: time="2025-02-24 13:41:26.363668579Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=82473887-9b62-4bed-ad37-c30903cc9b87 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:41:26 old-k8s-version-233759 crio[626]: time="2025-02-24 13:41:26.363705376Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=82473887-9b62-4bed-ad37-c30903cc9b87 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:41:26 old-k8s-version-233759 crio[626]: time="2025-02-24 13:41:26.401987889Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8df6f0ee-ed8b-4c81-a6d0-8250f7989679 name=/runtime.v1.RuntimeService/Version
	Feb 24 13:41:26 old-k8s-version-233759 crio[626]: time="2025-02-24 13:41:26.402071835Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8df6f0ee-ed8b-4c81-a6d0-8250f7989679 name=/runtime.v1.RuntimeService/Version
	Feb 24 13:41:26 old-k8s-version-233759 crio[626]: time="2025-02-24 13:41:26.403635886Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=181d8133-4733-44bc-8d46-220f3fddbc9c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:41:26 old-k8s-version-233759 crio[626]: time="2025-02-24 13:41:26.404141589Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740404486404108342,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=181d8133-4733-44bc-8d46-220f3fddbc9c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:41:26 old-k8s-version-233759 crio[626]: time="2025-02-24 13:41:26.404647215Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a3928383-b970-47e9-a163-6a40490e6124 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:41:26 old-k8s-version-233759 crio[626]: time="2025-02-24 13:41:26.404722642Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a3928383-b970-47e9-a163-6a40490e6124 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:41:26 old-k8s-version-233759 crio[626]: time="2025-02-24 13:41:26.404811984Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a3928383-b970-47e9-a163-6a40490e6124 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:41:26 old-k8s-version-233759 crio[626]: time="2025-02-24 13:41:26.439243605Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=802f1c18-119b-4fc3-99e0-d2c7d3b03ad5 name=/runtime.v1.RuntimeService/Version
	Feb 24 13:41:26 old-k8s-version-233759 crio[626]: time="2025-02-24 13:41:26.439318908Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=802f1c18-119b-4fc3-99e0-d2c7d3b03ad5 name=/runtime.v1.RuntimeService/Version
	Feb 24 13:41:26 old-k8s-version-233759 crio[626]: time="2025-02-24 13:41:26.440379288Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=300e9980-87c0-4674-99bb-31759a5530f4 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:41:26 old-k8s-version-233759 crio[626]: time="2025-02-24 13:41:26.440827242Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740404486440748145,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=300e9980-87c0-4674-99bb-31759a5530f4 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:41:26 old-k8s-version-233759 crio[626]: time="2025-02-24 13:41:26.441265216Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bea35df7-16c9-41e2-a0ce-c30895efff13 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:41:26 old-k8s-version-233759 crio[626]: time="2025-02-24 13:41:26.441309218Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bea35df7-16c9-41e2-a0ce-c30895efff13 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:41:26 old-k8s-version-233759 crio[626]: time="2025-02-24 13:41:26.441349316Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bea35df7-16c9-41e2-a0ce-c30895efff13 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:41:26 old-k8s-version-233759 crio[626]: time="2025-02-24 13:41:26.474864961Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=69472e50-4d66-44e6-9790-ae82338bc4ff name=/runtime.v1.RuntimeService/Version
	Feb 24 13:41:26 old-k8s-version-233759 crio[626]: time="2025-02-24 13:41:26.474949883Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=69472e50-4d66-44e6-9790-ae82338bc4ff name=/runtime.v1.RuntimeService/Version
	Feb 24 13:41:26 old-k8s-version-233759 crio[626]: time="2025-02-24 13:41:26.476565010Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c6ead8e7-5039-4b38-8599-031205108330 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:41:26 old-k8s-version-233759 crio[626]: time="2025-02-24 13:41:26.477091099Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740404486477067427,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c6ead8e7-5039-4b38-8599-031205108330 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:41:26 old-k8s-version-233759 crio[626]: time="2025-02-24 13:41:26.477697073Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6570ff3c-daa2-44b4-90b2-745b286533a3 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:41:26 old-k8s-version-233759 crio[626]: time="2025-02-24 13:41:26.477807044Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6570ff3c-daa2-44b4-90b2-745b286533a3 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:41:26 old-k8s-version-233759 crio[626]: time="2025-02-24 13:41:26.477853073Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6570ff3c-daa2-44b4-90b2-745b286533a3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb24 13:23] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054709] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042708] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Feb24 13:24] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.133792] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.074673] overlayfs: failed to resolve '/var/lib/containers/storage/overlay/opaque-bug-check3889635992/l1': -2
	[  +0.613262] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.739778] systemd-fstab-generator[553]: Ignoring "noauto" option for root device
	[  +0.062960] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072258] systemd-fstab-generator[565]: Ignoring "noauto" option for root device
	[  +0.214347] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.136588] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.281511] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +7.250646] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.068712] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.282155] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[ +12.331271] kauditd_printk_skb: 46 callbacks suppressed
	[Feb24 13:28] systemd-fstab-generator[4979]: Ignoring "noauto" option for root device
	[Feb24 13:30] systemd-fstab-generator[5261]: Ignoring "noauto" option for root device
	[  +0.064430] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:41:26 up 17 min,  0 users,  load average: 0.17, 0.05, 0.03
	Linux old-k8s-version-233759 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 24 13:41:23 old-k8s-version-233759 kubelet[6430]: net/http.(*Transport).dialConn(0xc000831540, 0x4f7fe00, 0xc000120018, 0x0, 0xc000b90300, 0x5, 0xc000a00a50, 0x24, 0x0, 0xc000a4b680, ...)
	Feb 24 13:41:23 old-k8s-version-233759 kubelet[6430]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Feb 24 13:41:23 old-k8s-version-233759 kubelet[6430]: net/http.(*Transport).dialConnFor(0xc000831540, 0xc000a77a20)
	Feb 24 13:41:23 old-k8s-version-233759 kubelet[6430]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Feb 24 13:41:23 old-k8s-version-233759 kubelet[6430]: created by net/http.(*Transport).queueForDial
	Feb 24 13:41:23 old-k8s-version-233759 kubelet[6430]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Feb 24 13:41:23 old-k8s-version-233759 kubelet[6430]: goroutine 164 [select]:
	Feb 24 13:41:23 old-k8s-version-233759 kubelet[6430]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc0005a55e0, 0x1, 0x0, 0x0, 0x0, 0x0)
	Feb 24 13:41:23 old-k8s-version-233759 kubelet[6430]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Feb 24 13:41:23 old-k8s-version-233759 kubelet[6430]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000bbcc00, 0x0, 0x0)
	Feb 24 13:41:23 old-k8s-version-233759 kubelet[6430]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Feb 24 13:41:23 old-k8s-version-233759 kubelet[6430]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000221c00)
	Feb 24 13:41:23 old-k8s-version-233759 kubelet[6430]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Feb 24 13:41:23 old-k8s-version-233759 kubelet[6430]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Feb 24 13:41:23 old-k8s-version-233759 kubelet[6430]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Feb 24 13:41:23 old-k8s-version-233759 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 24 13:41:23 old-k8s-version-233759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 24 13:41:24 old-k8s-version-233759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Feb 24 13:41:24 old-k8s-version-233759 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 24 13:41:24 old-k8s-version-233759 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 24 13:41:24 old-k8s-version-233759 kubelet[6440]: I0224 13:41:24.670297    6440 server.go:416] Version: v1.20.0
	Feb 24 13:41:24 old-k8s-version-233759 kubelet[6440]: I0224 13:41:24.671064    6440 server.go:837] Client rotation is on, will bootstrap in background
	Feb 24 13:41:24 old-k8s-version-233759 kubelet[6440]: I0224 13:41:24.673733    6440 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 24 13:41:24 old-k8s-version-233759 kubelet[6440]: W0224 13:41:24.675259    6440 manager.go:159] Cannot detect current cgroup on cgroup v2
	Feb 24 13:41:24 old-k8s-version-233759 kubelet[6440]: I0224 13:41:24.675998    6440 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
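The logs captured above show an empty CRI-O container list, a `kubectl describe nodes` that fails because nothing answers on localhost:8443 inside the guest, and a kubelet that systemd has restarted 114 times. As a rough way to repeat those post-mortem checks by hand (a sketch only: it assumes the old-k8s-version-233759 profile still exists and that crictl, journalctl, and ss are present in the guest image, which is not guaranteed), one could run:

	# Hedged sketch: re-run the post-mortem checks from the captured logs by hand.
	# List all containers over the CRI socket (the log above returned an empty list).
	out/minikube-linux-amd64 -p old-k8s-version-233759 ssh "sudo crictl ps -a"
	# Confirm the kubelet crash loop that systemd reports (restart counter at 114).
	out/minikube-linux-amd64 -p old-k8s-version-233759 ssh "sudo journalctl -u kubelet --no-pager -n 50"
	# Check whether anything is listening on the API server port (8443) that kubectl cannot reach.
	out/minikube-linux-amd64 -p old-k8s-version-233759 ssh "sudo ss -tlnp"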
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-233759 -n old-k8s-version-233759
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-233759 -n old-k8s-version-233759: exit status 2 (233.694777ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-233759" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.55s)
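This 541-second failure reduces to the API server never coming back after the stop/start cycle: inside the guest localhost:8443 refuses connections, the harness reports the apiserver as "Stopped", and the AddonExistsAfterStop warnings that follow show the same refusal from the host side at 192.168.50.62:8443. A minimal sketch of confirming that state from the host, assuming the kubeconfig context carries the same name as the profile (as minikube normally arranges):

	# Hedged sketch: confirm the apiserver state the harness reports as "Stopped".
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-233759
	# Query the kubernetes-dashboard pods that the following AddonExistsAfterStop failure
	# polls for; while the apiserver is down this returns the same "connection refused".
	kubectl --context old-k8s-version-233759 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard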

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (350.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
E0224 13:41:46.849534  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
E0224 13:41:47.049957  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/custom-flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
E0224 13:42:30.949195  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/enable-default-cni-799329/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
E0224 13:42:43.833595  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
E0224 13:43:28.218452  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/bridge-799329/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
E0224 13:43:54.808824  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/auto-799329/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
E0224 13:44:12.768817  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
E0224 13:44:49.925976  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
E0224 13:44:50.623887  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/no-preload-956442/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
E0224 13:44:57.776980  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/default-k8s-diff-port-108648/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
E0224 13:45:23.723879  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kindnet-799329/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
E0224 13:46:04.068000  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/calico-799329/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
E0224 13:46:13.692000  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/no-preload-956442/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
E0224 13:46:20.842967  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/default-k8s-diff-port-108648/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
E0224 13:46:46.849456  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
E0224 13:46:47.050021  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/custom-flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.62:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.62:8443: connect: connection refused
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
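The repeated WARNING lines above are the per-attempt output of the dashboard readiness poll from helpers_test.go:329: the test lists pods matching k8s-app=kubernetes-dashboard against the apiserver at 192.168.50.62:8443 and retries on error until the 9m0s deadline expires. A minimal sketch of that poll-until-ready pattern, assuming client-go and a hypothetical waitForLabeledPods helper with a hypothetical kubeconfig path (this is not the actual helpers_test.go implementation), is:

// Hypothetical sketch of the poll-until-ready pattern behind the WARNINGs above;
// not the minikube test helper itself.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForLabeledPods(ctx context.Context, kubeconfig, namespace, selector string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	for {
		pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
		switch {
		case err != nil:
			// Each failed list (here: "connect: connection refused" while the
			// apiserver is down) is logged and retried, as in the report above.
			fmt.Printf("WARNING: pod list for %q %q returned: %v\n", namespace, selector, err)
		case len(pods.Items) > 0 && allRunning(pods.Items):
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // surfaces as "context deadline exceeded"
		case <-time.After(5 * time.Second):
		}
	}
}

func allRunning(pods []corev1.Pod) bool {
	for _, p := range pods {
		if p.Status.Phase != corev1.PodRunning {
			return false
		}
	}
	return true
}

func main() {
	// 9m0s matches the deadline reported by start_stop_delete_test.go:285.
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()
	err := waitForLabeledPods(ctx, "/home/jenkins/.kube/config", // hypothetical kubeconfig path
		"kubernetes-dashboard", "k8s-app=kubernetes-dashboard")
	if err != nil {
		fmt.Println("wait failed:", err)
	}
}

With the apiserver at 192.168.50.62:8443 refusing connections, every list attempt fails, so such a loop can only exit through ctx.Err(), which matches the "context deadline exceeded" verdict above.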
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-233759 -n old-k8s-version-233759
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-233759 -n old-k8s-version-233759: exit status 2 (240.27353ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-233759" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-233759 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-233759 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.46µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-233759 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
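The assertion at start_stop_delete_test.go:295 checks that the dashboard-metrics-scraper deployment references registry.k8s.io/echoserver:1.4; because the kubectl describe above hit the already-expired context, no deployment info was captured. A hedged client-go sketch of such an image check (hypothetical deploymentUsesImage helper and kubeconfig path, not the test's own code) could look like:

// Hypothetical check that a deployment's containers use an expected image;
// not the code used by start_stop_delete_test.go.
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func deploymentUsesImage(ctx context.Context, kubeconfig, namespace, name, wantImage string) (bool, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return false, err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return false, err
	}
	deploy, err := client.AppsV1().Deployments(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	// Mirrors the "Expected to contain" wording of the assertion: any container
	// image that contains the expected reference counts as a match.
	for _, c := range deploy.Spec.Template.Spec.Containers {
		if strings.Contains(c.Image, wantImage) {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := deploymentUsesImage(context.Background(),
		"/home/jenkins/.kube/config", // hypothetical kubeconfig path
		"kubernetes-dashboard", "dashboard-metrics-scraper",
		"registry.k8s.io/echoserver:1.4")
	fmt.Println(ok, err)
}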
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-233759 -n old-k8s-version-233759
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-233759 -n old-k8s-version-233759: exit status 2 (237.392608ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-233759 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-037381 image list                          | embed-certs-037381           | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-037381                                  | embed-certs-037381           | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-037381                                  | embed-certs-037381           | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-037381                                  | embed-certs-037381           | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	| delete  | -p embed-certs-037381                                  | embed-certs-037381           | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	| start   | -p newest-cni-651381 --memory=2200 --alsologtostderr   | newest-cni-651381            | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:28 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| image   | no-preload-956442 image list                           | no-preload-956442            | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-956442                                   | no-preload-956442            | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-956442                                   | no-preload-956442            | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-956442                                   | no-preload-956442            | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	| delete  | -p no-preload-956442                                   | no-preload-956442            | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	| image   | default-k8s-diff-port-108648                           | default-k8s-diff-port-108648 | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-108648 | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | default-k8s-diff-port-108648                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-108648 | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | default-k8s-diff-port-108648                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-108648 | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | default-k8s-diff-port-108648                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-108648 | jenkins | v1.35.0 | 24 Feb 25 13:27 UTC | 24 Feb 25 13:27 UTC |
	|         | default-k8s-diff-port-108648                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-651381             | newest-cni-651381            | jenkins | v1.35.0 | 24 Feb 25 13:28 UTC | 24 Feb 25 13:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-651381                                   | newest-cni-651381            | jenkins | v1.35.0 | 24 Feb 25 13:28 UTC | 24 Feb 25 13:28 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-651381                  | newest-cni-651381            | jenkins | v1.35.0 | 24 Feb 25 13:28 UTC | 24 Feb 25 13:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-651381 --memory=2200 --alsologtostderr   | newest-cni-651381            | jenkins | v1.35.0 | 24 Feb 25 13:28 UTC | 24 Feb 25 13:29 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| image   | newest-cni-651381 image list                           | newest-cni-651381            | jenkins | v1.35.0 | 24 Feb 25 13:29 UTC | 24 Feb 25 13:29 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-651381                                   | newest-cni-651381            | jenkins | v1.35.0 | 24 Feb 25 13:29 UTC | 24 Feb 25 13:29 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-651381                                   | newest-cni-651381            | jenkins | v1.35.0 | 24 Feb 25 13:29 UTC | 24 Feb 25 13:29 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-651381                                   | newest-cni-651381            | jenkins | v1.35.0 | 24 Feb 25 13:29 UTC | 24 Feb 25 13:29 UTC |
	| delete  | -p newest-cni-651381                                   | newest-cni-651381            | jenkins | v1.35.0 | 24 Feb 25 13:29 UTC | 24 Feb 25 13:29 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/24 13:28:38
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 13:28:38.792971  956077 out.go:345] Setting OutFile to fd 1 ...
	I0224 13:28:38.793077  956077 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 13:28:38.793085  956077 out.go:358] Setting ErrFile to fd 2...
	I0224 13:28:38.793089  956077 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 13:28:38.793277  956077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-887294/.minikube/bin
	I0224 13:28:38.793883  956077 out.go:352] Setting JSON to false
	I0224 13:28:38.794844  956077 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":11460,"bootTime":1740392259,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 13:28:38.794956  956077 start.go:139] virtualization: kvm guest
	I0224 13:28:38.797461  956077 out.go:177] * [newest-cni-651381] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0224 13:28:38.798901  956077 out.go:177]   - MINIKUBE_LOCATION=20451
	I0224 13:28:38.798939  956077 notify.go:220] Checking for updates...
	I0224 13:28:38.801509  956077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 13:28:38.802725  956077 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20451-887294/kubeconfig
	I0224 13:28:38.804035  956077 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-887294/.minikube
	I0224 13:28:38.805462  956077 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0224 13:28:38.806731  956077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 13:28:38.808519  956077 config.go:182] Loaded profile config "newest-cni-651381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 13:28:38.808929  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:28:38.808983  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:28:38.824230  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33847
	I0224 13:28:38.824657  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:28:38.825223  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:28:38.825247  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:28:38.825706  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:28:38.825963  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:28:38.826250  956077 driver.go:394] Setting default libvirt URI to qemu:///system
	I0224 13:28:38.826574  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:28:38.826623  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:28:38.841716  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37847
	I0224 13:28:38.842131  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:28:38.842597  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:28:38.842619  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:28:38.842935  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:28:38.843142  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:28:38.879934  956077 out.go:177] * Using the kvm2 driver based on existing profile
	I0224 13:28:38.881238  956077 start.go:297] selected driver: kvm2
	I0224 13:28:38.881261  956077 start.go:901] validating driver "kvm2" against &{Name:newest-cni-651381 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-651381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 13:28:38.881430  956077 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 13:28:38.882088  956077 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 13:28:38.882170  956077 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20451-887294/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0224 13:28:38.897736  956077 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0224 13:28:38.898150  956077 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0224 13:28:38.898189  956077 cni.go:84] Creating CNI manager for ""
	I0224 13:28:38.898247  956077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0224 13:28:38.898285  956077 start.go:340] cluster config:
	{Name:newest-cni-651381 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-651381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 13:28:38.898383  956077 iso.go:125] acquiring lock: {Name:mk57408cca66a96a13d93cda43cdfac6e61aef3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 13:28:38.900247  956077 out.go:177] * Starting "newest-cni-651381" primary control-plane node in "newest-cni-651381" cluster
	I0224 13:28:38.901467  956077 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0224 13:28:38.901516  956077 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0224 13:28:38.901527  956077 cache.go:56] Caching tarball of preloaded images
	I0224 13:28:38.901613  956077 preload.go:172] Found /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0224 13:28:38.901623  956077 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0224 13:28:38.901723  956077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/newest-cni-651381/config.json ...
	I0224 13:28:38.901897  956077 start.go:360] acquireMachinesLock for newest-cni-651381: {Name:mk023761b01bb629a1acd40bc8104cc517b0e15b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0224 13:28:38.901940  956077 start.go:364] duration metric: took 22.052µs to acquireMachinesLock for "newest-cni-651381"
	I0224 13:28:38.901954  956077 start.go:96] Skipping create...Using existing machine configuration
	I0224 13:28:38.901962  956077 fix.go:54] fixHost starting: 
	I0224 13:28:38.902241  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:28:38.902287  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:28:38.917188  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45863
	I0224 13:28:38.917773  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:28:38.918380  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:28:38.918452  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:28:38.918772  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:28:38.918951  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:28:38.919074  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetState
	I0224 13:28:38.920729  956077 fix.go:112] recreateIfNeeded on newest-cni-651381: state=Stopped err=<nil>
	I0224 13:28:38.920774  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	W0224 13:28:38.920911  956077 fix.go:138] unexpected machine state, will restart: <nil>
	I0224 13:28:38.922862  956077 out.go:177] * Restarting existing kvm2 VM for "newest-cni-651381" ...
	I0224 13:28:38.924182  956077 main.go:141] libmachine: (newest-cni-651381) Calling .Start
	I0224 13:28:38.924366  956077 main.go:141] libmachine: (newest-cni-651381) starting domain...
	I0224 13:28:38.924388  956077 main.go:141] libmachine: (newest-cni-651381) ensuring networks are active...
	I0224 13:28:38.925130  956077 main.go:141] libmachine: (newest-cni-651381) Ensuring network default is active
	I0224 13:28:38.925476  956077 main.go:141] libmachine: (newest-cni-651381) Ensuring network mk-newest-cni-651381 is active
	I0224 13:28:38.925802  956077 main.go:141] libmachine: (newest-cni-651381) getting domain XML...
	I0224 13:28:38.926703  956077 main.go:141] libmachine: (newest-cni-651381) creating domain...
	I0224 13:28:40.156271  956077 main.go:141] libmachine: (newest-cni-651381) waiting for IP...
	I0224 13:28:40.157205  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:40.157681  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:40.157772  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:40.157685  956112 retry.go:31] will retry after 260.668185ms: waiting for domain to come up
	I0224 13:28:40.420311  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:40.420800  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:40.420848  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:40.420767  956112 retry.go:31] will retry after 303.764677ms: waiting for domain to come up
	I0224 13:28:40.726666  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:40.727228  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:40.727281  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:40.727200  956112 retry.go:31] will retry after 355.373964ms: waiting for domain to come up
	I0224 13:28:41.083712  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:41.084293  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:41.084350  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:41.084276  956112 retry.go:31] will retry after 470.293336ms: waiting for domain to come up
	I0224 13:28:41.556004  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:41.556503  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:41.556533  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:41.556435  956112 retry.go:31] will retry after 528.413702ms: waiting for domain to come up
	I0224 13:28:42.086215  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:42.086654  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:42.086688  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:42.086615  956112 retry.go:31] will retry after 758.532968ms: waiting for domain to come up
	I0224 13:28:42.846682  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:42.847289  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:42.847316  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:42.847249  956112 retry.go:31] will retry after 771.163995ms: waiting for domain to come up
	I0224 13:28:43.620325  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:43.620953  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:43.620987  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:43.620927  956112 retry.go:31] will retry after 1.349772038s: waiting for domain to come up
	I0224 13:28:44.971949  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:44.972514  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:44.972544  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:44.972446  956112 retry.go:31] will retry after 1.187923617s: waiting for domain to come up
	I0224 13:28:46.161965  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:46.162503  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:46.162523  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:46.162464  956112 retry.go:31] will retry after 2.129619904s: waiting for domain to come up
	I0224 13:28:48.294708  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:48.295258  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:48.295292  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:48.295208  956112 retry.go:31] will retry after 2.033415833s: waiting for domain to come up
	I0224 13:28:50.330158  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:50.330661  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:50.330693  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:50.330607  956112 retry.go:31] will retry after 3.415912416s: waiting for domain to come up
	I0224 13:28:53.750421  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:53.750924  956077 main.go:141] libmachine: (newest-cni-651381) DBG | unable to find current IP address of domain newest-cni-651381 in network mk-newest-cni-651381
	I0224 13:28:53.750982  956077 main.go:141] libmachine: (newest-cni-651381) DBG | I0224 13:28:53.750908  956112 retry.go:31] will retry after 3.200463394s: waiting for domain to come up
	I0224 13:28:56.955224  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:56.955868  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has current primary IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:56.955897  956077 main.go:141] libmachine: (newest-cni-651381) found domain IP: 192.168.39.43
	I0224 13:28:56.955914  956077 main.go:141] libmachine: (newest-cni-651381) reserving static IP address...
	I0224 13:28:56.956419  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "newest-cni-651381", mac: "52:54:00:1b:98:b8", ip: "192.168.39.43"} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:56.956465  956077 main.go:141] libmachine: (newest-cni-651381) DBG | skip adding static IP to network mk-newest-cni-651381 - found existing host DHCP lease matching {name: "newest-cni-651381", mac: "52:54:00:1b:98:b8", ip: "192.168.39.43"}
	I0224 13:28:56.956483  956077 main.go:141] libmachine: (newest-cni-651381) reserved static IP address 192.168.39.43 for domain newest-cni-651381
	I0224 13:28:56.956496  956077 main.go:141] libmachine: (newest-cni-651381) waiting for SSH...
	I0224 13:28:56.956507  956077 main.go:141] libmachine: (newest-cni-651381) DBG | Getting to WaitForSSH function...
	I0224 13:28:56.959046  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:56.959392  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:56.959427  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:56.959538  956077 main.go:141] libmachine: (newest-cni-651381) DBG | Using SSH client type: external
	I0224 13:28:56.959564  956077 main.go:141] libmachine: (newest-cni-651381) DBG | Using SSH private key: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/newest-cni-651381/id_rsa (-rw-------)
	I0224 13:28:56.959630  956077 main.go:141] libmachine: (newest-cni-651381) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.43 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20451-887294/.minikube/machines/newest-cni-651381/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0224 13:28:56.959653  956077 main.go:141] libmachine: (newest-cni-651381) DBG | About to run SSH command:
	I0224 13:28:56.959689  956077 main.go:141] libmachine: (newest-cni-651381) DBG | exit 0
	I0224 13:28:57.089584  956077 main.go:141] libmachine: (newest-cni-651381) DBG | SSH cmd err, output: <nil>: 
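
The lines above show the kvm2 driver polling libvirt's DHCP leases with a growing, jittered backoff until the domain reports an IP, then confirming the machine is reachable by running `exit 0` over SSH. A minimal sketch of that retry-with-backoff pattern, assuming a hypothetical lookupDomainIP helper (this is not minikube's actual retry.go API):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupDomainIP stands in for querying libvirt's DHCP leases for the
	// domain's MAC; here it simply fails a few times before "finding" an IP.
	func lookupDomainIP(attempt int) (string, error) {
		if attempt < 5 {
			return "", errors.New("unable to find current IP address of domain")
		}
		return "192.168.39.43", nil
	}

	// waitForIP retries lookupDomainIP with a jittered, growing delay,
	// mirroring the "will retry after ..." lines in the log.
	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 500 * time.Millisecond
		for attempt := 0; time.Now().Before(deadline); attempt++ {
			ip, err := lookupDomainIP(attempt)
			if err == nil {
				return ip, nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for domain to come up\n", wait)
			time.Sleep(wait)
			delay = delay * 3 / 2 // grow the base delay between attempts
		}
		return "", fmt.Errorf("domain did not report an IP within %v", timeout)
	}

	func main() {
		ip, err := waitForIP(30 * time.Second)
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("found domain IP:", ip)
	}
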
	I0224 13:28:57.089980  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetConfigRaw
	I0224 13:28:57.090668  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetIP
	I0224 13:28:57.093149  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.093555  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:57.093576  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.093814  956077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/newest-cni-651381/config.json ...
	I0224 13:28:57.094015  956077 machine.go:93] provisionDockerMachine start ...
	I0224 13:28:57.094035  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:28:57.094293  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:28:57.096640  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.097039  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:57.097068  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.097149  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:28:57.097351  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:57.097496  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:57.097643  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:28:57.097810  956077 main.go:141] libmachine: Using SSH client type: native
	I0224 13:28:57.098046  956077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0224 13:28:57.098063  956077 main.go:141] libmachine: About to run SSH command:
	hostname
	I0224 13:28:57.218057  956077 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0224 13:28:57.218090  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetMachineName
	I0224 13:28:57.218365  956077 buildroot.go:166] provisioning hostname "newest-cni-651381"
	I0224 13:28:57.218404  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetMachineName
	I0224 13:28:57.218597  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:28:57.221391  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.221750  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:57.221778  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.221974  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:28:57.222142  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:57.222294  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:57.222392  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:28:57.222531  956077 main.go:141] libmachine: Using SSH client type: native
	I0224 13:28:57.222718  956077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0224 13:28:57.222731  956077 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-651381 && echo "newest-cni-651381" | sudo tee /etc/hostname
	I0224 13:28:57.354081  956077 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-651381
	
	I0224 13:28:57.354129  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:28:57.357103  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.357516  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:57.357552  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.357765  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:28:57.357998  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:57.358156  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:57.358339  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:28:57.358627  956077 main.go:141] libmachine: Using SSH client type: native
	I0224 13:28:57.358827  956077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0224 13:28:57.358843  956077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-651381' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-651381/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-651381' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 13:28:57.483573  956077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
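
The remote script above is careful to be idempotent: it leaves /etc/hosts alone when the new hostname is already mapped, rewrites an existing 127.0.1.1 line if there is one, and appends one otherwise. A native Go sketch of the same rewrite applied to the file's contents (a hypothetical helper, not the shell snippet minikube actually sends):

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// ensureHostname returns hosts content that maps 127.0.1.1 to name,
	// leaving the content untouched if the name is already present.
	func ensureHostname(hosts, name string) string {
		if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
			return hosts // hostname already mapped
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(hosts) {
			return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
	}

	func main() {
		hosts := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
		fmt.Print(ensureHostname(hosts, "newest-cni-651381"))
	}
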
	I0224 13:28:57.483608  956077 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20451-887294/.minikube CaCertPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20451-887294/.minikube}
	I0224 13:28:57.483657  956077 buildroot.go:174] setting up certificates
	I0224 13:28:57.483671  956077 provision.go:84] configureAuth start
	I0224 13:28:57.483688  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetMachineName
	I0224 13:28:57.484035  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetIP
	I0224 13:28:57.486755  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.487062  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:57.487093  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.487216  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:28:57.489282  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.489619  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:57.489647  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.489808  956077 provision.go:143] copyHostCerts
	I0224 13:28:57.489880  956077 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-887294/.minikube/ca.pem, removing ...
	I0224 13:28:57.489894  956077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-887294/.minikube/ca.pem
	I0224 13:28:57.489977  956077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20451-887294/.minikube/ca.pem (1082 bytes)
	I0224 13:28:57.490110  956077 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-887294/.minikube/cert.pem, removing ...
	I0224 13:28:57.490121  956077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-887294/.minikube/cert.pem
	I0224 13:28:57.490161  956077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20451-887294/.minikube/cert.pem (1123 bytes)
	I0224 13:28:57.490254  956077 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-887294/.minikube/key.pem, removing ...
	I0224 13:28:57.490264  956077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-887294/.minikube/key.pem
	I0224 13:28:57.490300  956077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20451-887294/.minikube/key.pem (1679 bytes)
	I0224 13:28:57.490392  956077 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20451-887294/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca-key.pem org=jenkins.newest-cni-651381 san=[127.0.0.1 192.168.39.43 localhost minikube newest-cni-651381]
	I0224 13:28:57.603657  956077 provision.go:177] copyRemoteCerts
	I0224 13:28:57.603728  956077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 13:28:57.603756  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:28:57.606668  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.607001  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:57.607035  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.607186  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:28:57.607409  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:57.607596  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:28:57.607747  956077 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/newest-cni-651381/id_rsa Username:docker}
	I0224 13:28:57.696271  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0224 13:28:57.720966  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0224 13:28:57.745080  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0224 13:28:57.770570  956077 provision.go:87] duration metric: took 286.877496ms to configureAuth
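
configureAuth generates a server certificate whose SANs cover 127.0.0.1, the machine IP, localhost, minikube and the machine name, signed by the CA under .minikube/certs. A compact crypto/x509 sketch that builds the same SAN list (self-signed here for brevity; the real flow signs with the CA key):

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-651381"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs matching the "san=[...]" list in the log above.
			DNSNames:    []string{"localhost", "minikube", "newest-cni-651381"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.43")},
		}
		// Self-signed for the sketch; a real setup passes the CA cert and key here.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
	}
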
	I0224 13:28:57.770610  956077 buildroot.go:189] setting minikube options for container-runtime
	I0224 13:28:57.770819  956077 config.go:182] Loaded profile config "newest-cni-651381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 13:28:57.770914  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:28:57.773830  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.774134  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:57.774182  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:57.774374  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:28:57.774576  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:57.774725  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:57.774844  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:28:57.774994  956077 main.go:141] libmachine: Using SSH client type: native
	I0224 13:28:57.775210  956077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0224 13:28:57.775229  956077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0224 13:28:58.015198  956077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0224 13:28:58.015231  956077 machine.go:96] duration metric: took 921.200919ms to provisionDockerMachine
	I0224 13:28:58.015248  956077 start.go:293] postStartSetup for "newest-cni-651381" (driver="kvm2")
	I0224 13:28:58.015261  956077 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 13:28:58.015323  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:28:58.015781  956077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 13:28:58.015825  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:28:58.018588  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:58.018934  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:58.018957  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:58.019113  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:28:58.019321  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:58.019495  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:28:58.019655  956077 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/newest-cni-651381/id_rsa Username:docker}
	I0224 13:28:58.108667  956077 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 13:28:58.113192  956077 info.go:137] Remote host: Buildroot 2023.02.9
	I0224 13:28:58.113221  956077 filesync.go:126] Scanning /home/jenkins/minikube-integration/20451-887294/.minikube/addons for local assets ...
	I0224 13:28:58.113289  956077 filesync.go:126] Scanning /home/jenkins/minikube-integration/20451-887294/.minikube/files for local assets ...
	I0224 13:28:58.113387  956077 filesync.go:149] local asset: /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/ssl/certs/8945642.pem -> 8945642.pem in /etc/ssl/certs
	I0224 13:28:58.113476  956077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0224 13:28:58.123292  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/ssl/certs/8945642.pem --> /etc/ssl/certs/8945642.pem (1708 bytes)
	I0224 13:28:58.150288  956077 start.go:296] duration metric: took 135.022634ms for postStartSetup
	I0224 13:28:58.150340  956077 fix.go:56] duration metric: took 19.248378049s for fixHost
	I0224 13:28:58.150364  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:28:58.152951  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:58.153283  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:58.153338  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:58.153514  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:28:58.153706  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:58.153862  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:58.154044  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:28:58.154233  956077 main.go:141] libmachine: Using SSH client type: native
	I0224 13:28:58.154467  956077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0224 13:28:58.154479  956077 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0224 13:28:58.270588  956077 main.go:141] libmachine: SSH cmd err, output: <nil>: 1740403738.235399997
	
	I0224 13:28:58.270619  956077 fix.go:216] guest clock: 1740403738.235399997
	I0224 13:28:58.270629  956077 fix.go:229] Guest: 2025-02-24 13:28:58.235399997 +0000 UTC Remote: 2025-02-24 13:28:58.150345054 +0000 UTC m=+19.397261834 (delta=85.054943ms)
	I0224 13:28:58.270676  956077 fix.go:200] guest clock delta is within tolerance: 85.054943ms
	I0224 13:28:58.270685  956077 start.go:83] releasing machines lock for "newest-cni-651381", held for 19.368735573s
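
The `date +%s.%N` round trip above yields a guest timestamp that is compared against the host-side clock; the ~85ms delta is inside tolerance, so no time sync is forced. A small sketch of parsing that output and computing the delta, using the values from the log (the tolerance constant is illustrative):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseEpoch turns the output of `date +%s.%N` (nine fractional digits)
	// into a time.Time.
	func parseEpoch(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseEpoch("1740403738.235399997") // value from the log above
		if err != nil {
			panic(err)
		}
		remote := time.Date(2025, 2, 24, 13, 28, 58, 150345054, time.UTC) // host-side timestamp
		delta := guest.Sub(remote)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // illustrative threshold
		fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
	}
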
	I0224 13:28:58.270712  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:28:58.271039  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetIP
	I0224 13:28:58.273607  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:58.274111  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:58.274137  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:58.274333  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:28:58.274936  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:28:58.275139  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:28:58.275266  956077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0224 13:28:58.275326  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:28:58.275372  956077 ssh_runner.go:195] Run: cat /version.json
	I0224 13:28:58.275401  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:28:58.278276  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:58.278682  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:58.278713  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:58.278732  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:58.278841  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:28:58.279035  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:58.279101  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:58.279129  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:58.279314  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:28:58.279344  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:28:58.279459  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:28:58.279555  956077 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/newest-cni-651381/id_rsa Username:docker}
	I0224 13:28:58.279604  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:28:58.279716  956077 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/newest-cni-651381/id_rsa Username:docker}
	I0224 13:28:58.363123  956077 ssh_runner.go:195] Run: systemctl --version
	I0224 13:28:58.385513  956077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0224 13:28:58.537461  956077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0224 13:28:58.543840  956077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0224 13:28:58.543916  956077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0224 13:28:58.562167  956077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0224 13:28:58.562203  956077 start.go:495] detecting cgroup driver to use...
	I0224 13:28:58.562288  956077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0224 13:28:58.580754  956077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0224 13:28:58.595609  956077 docker.go:217] disabling cri-docker service (if available) ...
	I0224 13:28:58.595684  956077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0224 13:28:58.610441  956077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0224 13:28:58.625512  956077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0224 13:28:58.742160  956077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0224 13:28:58.897257  956077 docker.go:233] disabling docker service ...
	I0224 13:28:58.897354  956077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0224 13:28:58.913053  956077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0224 13:28:58.927511  956077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0224 13:28:59.078303  956077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0224 13:28:59.190231  956077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0224 13:28:59.205007  956077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 13:28:59.224899  956077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0224 13:28:59.224959  956077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:28:59.235985  956077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0224 13:28:59.236076  956077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:28:59.247262  956077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:28:59.258419  956077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:28:59.269559  956077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 13:28:59.281485  956077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:28:59.293207  956077 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:28:59.312591  956077 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 13:28:59.324339  956077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 13:28:59.334891  956077 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0224 13:28:59.334973  956077 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0224 13:28:59.349831  956077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0224 13:28:59.360347  956077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 13:28:59.479779  956077 ssh_runner.go:195] Run: sudo systemctl restart crio
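
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls before a daemon-reload and crio restart. A regex-based Go sketch of the same edits on the file contents (illustrative, not minikube's crio.go code):

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	func configureCrio(conf string) string {
		pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

		cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

		// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
		conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
		conf = strings.Replace(conf,
			`cgroup_manager = "cgroupfs"`,
			"cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"", 1)

		// Allow binding low ports inside pods, as the default_sysctls edit does.
		if !strings.Contains(conf, "default_sysctls") {
			conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
		}
		return conf
	}

	func main() {
		sample := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n" +
			"[crio.runtime]\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
		fmt.Print(configureCrio(sample))
	}
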
	I0224 13:28:59.577405  956077 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0224 13:28:59.577519  956077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0224 13:28:59.583030  956077 start.go:563] Will wait 60s for crictl version
	I0224 13:28:59.583098  956077 ssh_runner.go:195] Run: which crictl
	I0224 13:28:59.587159  956077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0224 13:28:59.625913  956077 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0224 13:28:59.626017  956077 ssh_runner.go:195] Run: crio --version
	I0224 13:28:59.656040  956077 ssh_runner.go:195] Run: crio --version
	I0224 13:28:59.690484  956077 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0224 13:28:59.691655  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetIP
	I0224 13:28:59.694827  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:59.695279  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:28:59.695313  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:28:59.695529  956077 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0224 13:28:59.700214  956077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 13:28:59.714858  956077 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0224 13:28:59.716146  956077 kubeadm.go:883] updating cluster {Name:newest-cni-651381 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-6
51381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddr
ess: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0224 13:28:59.716344  956077 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0224 13:28:59.716441  956077 ssh_runner.go:195] Run: sudo crictl images --output json
	I0224 13:28:59.759022  956077 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0224 13:28:59.759106  956077 ssh_runner.go:195] Run: which lz4
	I0224 13:28:59.763641  956077 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0224 13:28:59.768063  956077 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0224 13:28:59.768104  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0224 13:29:01.313361  956077 crio.go:462] duration metric: took 1.549763964s to copy over tarball
	I0224 13:29:01.313502  956077 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0224 13:29:03.649181  956077 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.335640797s)
	I0224 13:29:03.649213  956077 crio.go:469] duration metric: took 2.335814633s to extract the tarball
	I0224 13:29:03.649221  956077 ssh_runner.go:146] rm: /preloaded.tar.lz4
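
Since no preloaded images were found in the CRI-O store, the ~400 MB preload tarball is copied to /preloaded.tar.lz4 and unpacked into /var with `tar -I lz4`, then deleted. A sketch of that extraction step via os/exec (paths are placeholders; the real run executes this over SSH as root):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// extractPreload unpacks an lz4-compressed image tarball into dest,
	// preserving extended attributes the way the tar invocation in the log does.
	// It requires tar and lz4 on the machine running it.
	func extractPreload(tarball, dest string) error {
		cmd := exec.Command("tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", dest, "-xf", tarball)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("tar failed: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		// Placeholder paths; on the test VM these are /preloaded.tar.lz4 and /var.
		if err := extractPreload("/tmp/preloaded.tar.lz4", "/tmp/preload-root"); err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("preload extracted")
	}
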
	I0224 13:29:03.687968  956077 ssh_runner.go:195] Run: sudo crictl images --output json
	I0224 13:29:03.741442  956077 crio.go:514] all images are preloaded for cri-o runtime.
	I0224 13:29:03.741478  956077 cache_images.go:84] Images are preloaded, skipping loading
	I0224 13:29:03.741490  956077 kubeadm.go:934] updating node { 192.168.39.43 8443 v1.32.2 crio true true} ...
	I0224 13:29:03.741662  956077 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-651381 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.43
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:newest-cni-651381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0224 13:29:03.741787  956077 ssh_runner.go:195] Run: crio config
	I0224 13:29:03.799716  956077 cni.go:84] Creating CNI manager for ""
	I0224 13:29:03.799747  956077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0224 13:29:03.799764  956077 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0224 13:29:03.799794  956077 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.43 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-651381 NodeName:newest-cni-651381 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.43"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.43 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0224 13:29:03.799960  956077 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.43
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-651381"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.43"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.43"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
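
The dump above is the kubeadm/kubelet/kube-proxy configuration written to /var/tmp/minikube/kubeadm.yaml.new; the pod CIDR, service CIDR, node IP and admission plugins come from the cluster config logged earlier. A toy text/template sketch rendering just the networking stanza from those parameters (the template text is illustrative, not minikube's bundled template):

	package main

	import (
		"os"
		"text/template"
	)

	type networking struct {
		DNSDomain     string
		PodSubnet     string
		ServiceSubnet string
	}

	const networkingTmpl = `networking:
	  dnsDomain: {{.DNSDomain}}
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		tmpl := template.Must(template.New("networking").Parse(networkingTmpl))
		// Values taken from the generated config above.
		err := tmpl.Execute(os.Stdout, networking{
			DNSDomain:     "cluster.local",
			PodSubnet:     "10.42.0.0/16",
			ServiceSubnet: "10.96.0.0/12",
		})
		if err != nil {
			panic(err)
		}
	}
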
	
	I0224 13:29:03.800042  956077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0224 13:29:03.811912  956077 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 13:29:03.812012  956077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0224 13:29:03.823338  956077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0224 13:29:03.842685  956077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0224 13:29:03.861976  956077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0224 13:29:03.882258  956077 ssh_runner.go:195] Run: grep 192.168.39.43	control-plane.minikube.internal$ /etc/hosts
	I0224 13:29:03.887084  956077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.43	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 13:29:03.902004  956077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 13:29:04.052713  956077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0224 13:29:04.071828  956077 certs.go:68] Setting up /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/newest-cni-651381 for IP: 192.168.39.43
	I0224 13:29:04.071866  956077 certs.go:194] generating shared ca certs ...
	I0224 13:29:04.071893  956077 certs.go:226] acquiring lock for ca certs: {Name:mk38777c6b180f63d1816020cff79a01106ddf33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:29:04.072105  956077 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20451-887294/.minikube/ca.key
	I0224 13:29:04.072202  956077 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20451-887294/.minikube/proxy-client-ca.key
	I0224 13:29:04.072219  956077 certs.go:256] generating profile certs ...
	I0224 13:29:04.072346  956077 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/newest-cni-651381/client.key
	I0224 13:29:04.072430  956077 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/newest-cni-651381/apiserver.key.5ef52652
	I0224 13:29:04.072487  956077 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/newest-cni-651381/proxy-client.key
	I0224 13:29:04.072689  956077 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/894564.pem (1338 bytes)
	W0224 13:29:04.072726  956077 certs.go:480] ignoring /home/jenkins/minikube-integration/20451-887294/.minikube/certs/894564_empty.pem, impossibly tiny 0 bytes
	I0224 13:29:04.072737  956077 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca-key.pem (1679 bytes)
	I0224 13:29:04.072760  956077 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/ca.pem (1082 bytes)
	I0224 13:29:04.072785  956077 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/cert.pem (1123 bytes)
	I0224 13:29:04.072809  956077 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/certs/key.pem (1679 bytes)
	I0224 13:29:04.072844  956077 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/ssl/certs/8945642.pem (1708 bytes)
	I0224 13:29:04.073566  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0224 13:29:04.112077  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0224 13:29:04.149068  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0224 13:29:04.179616  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0224 13:29:04.209417  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/newest-cni-651381/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0224 13:29:04.245961  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/newest-cni-651381/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0224 13:29:04.279758  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/newest-cni-651381/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0224 13:29:04.306976  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/newest-cni-651381/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0224 13:29:04.334286  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0224 13:29:04.361320  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/certs/894564.pem --> /usr/share/ca-certificates/894564.pem (1338 bytes)
	I0224 13:29:04.387966  956077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/ssl/certs/8945642.pem --> /usr/share/ca-certificates/8945642.pem (1708 bytes)
	I0224 13:29:04.414747  956077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0224 13:29:04.433921  956077 ssh_runner.go:195] Run: openssl version
	I0224 13:29:04.440667  956077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8945642.pem && ln -fs /usr/share/ca-certificates/8945642.pem /etc/ssl/certs/8945642.pem"
	I0224 13:29:04.453454  956077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8945642.pem
	I0224 13:29:04.459040  956077 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 24 12:09 /usr/share/ca-certificates/8945642.pem
	I0224 13:29:04.459108  956077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8945642.pem
	I0224 13:29:04.466078  956077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8945642.pem /etc/ssl/certs/3ec20f2e.0"
	I0224 13:29:04.478970  956077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0224 13:29:04.491228  956077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0224 13:29:04.496708  956077 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 24 12:01 /usr/share/ca-certificates/minikubeCA.pem
	I0224 13:29:04.496771  956077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0224 13:29:04.503067  956077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0224 13:29:04.515240  956077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/894564.pem && ln -fs /usr/share/ca-certificates/894564.pem /etc/ssl/certs/894564.pem"
	I0224 13:29:04.527524  956077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/894564.pem
	I0224 13:29:04.532779  956077 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 24 12:09 /usr/share/ca-certificates/894564.pem
	I0224 13:29:04.532845  956077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/894564.pem
	I0224 13:29:04.539425  956077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/894564.pem /etc/ssl/certs/51391683.0"
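
Each PEM copied under /usr/share/ca-certificates is exposed to the system trust store by symlinking it into /etc/ssl/certs as <subject-hash>.0, where the hash comes from `openssl x509 -hash -noout`. A sketch that computes the link name the same way (it shells out to the real openssl binary; the paths are the ones in the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// subjectHashLink returns the /etc/ssl/certs symlink name OpenSSL-based
	// trust stores expect for the given PEM certificate.
	func subjectHashLink(pemPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return "", err
		}
		return "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0", nil
	}

	func main() {
		link, err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem")
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		// The log runs the equivalent of: sudo ln -fs <pem> <link>
		fmt.Printf("ln -fs /usr/share/ca-certificates/minikubeCA.pem %s\n", link)
	}
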
	I0224 13:29:04.551398  956077 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0224 13:29:04.556720  956077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0224 13:29:04.566700  956077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0224 13:29:04.573865  956077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0224 13:29:04.580856  956077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0224 13:29:04.588174  956077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0224 13:29:04.595837  956077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
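
Each control-plane certificate is then checked with the equivalent of `openssl x509 -checkend 86400`, i.e. "will this cert still be valid 24 hours from now". The same check in Go against a parsed certificate (the PEM path in main is a placeholder):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// validFor reports whether the first certificate in pemPath is still
	// valid at now+window, mirroring `openssl x509 -checkend <seconds>`.
	func validFor(pemPath string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(pemPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", pemPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).Before(cert.NotAfter), nil
	}

	func main() {
		// Placeholder path; the log checks certs under /var/lib/minikube/certs.
		ok, err := validFor("apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("valid for the next 24h:", ok)
	}
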
	I0224 13:29:04.603384  956077 kubeadm.go:392] StartCluster: {Name:newest-cni-651381 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-6513
81 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress
: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 13:29:04.603508  956077 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0224 13:29:04.603592  956077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0224 13:29:04.647022  956077 cri.go:89] found id: ""
	I0224 13:29:04.647118  956077 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0224 13:29:04.658566  956077 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0224 13:29:04.658595  956077 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0224 13:29:04.658664  956077 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0224 13:29:04.669446  956077 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0224 13:29:04.670107  956077 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-651381" does not appear in /home/jenkins/minikube-integration/20451-887294/kubeconfig
	I0224 13:29:04.670340  956077 kubeconfig.go:62] /home/jenkins/minikube-integration/20451-887294/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-651381" cluster setting kubeconfig missing "newest-cni-651381" context setting]
	I0224 13:29:04.670763  956077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/kubeconfig: {Name:mk0122b69f41cd40d5267f436266ccce22ce5ef2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:29:04.703477  956077 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0224 13:29:04.714783  956077 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.43
	I0224 13:29:04.714826  956077 kubeadm.go:1160] stopping kube-system containers ...
	I0224 13:29:04.714856  956077 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0224 13:29:04.714926  956077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0224 13:29:04.753447  956077 cri.go:89] found id: ""
	I0224 13:29:04.753549  956077 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0224 13:29:04.771436  956077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 13:29:04.782526  956077 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 13:29:04.782550  956077 kubeadm.go:157] found existing configuration files:
	
	I0224 13:29:04.782599  956077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0224 13:29:04.793248  956077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0224 13:29:04.793349  956077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0224 13:29:04.804033  956077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0224 13:29:04.814167  956077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0224 13:29:04.814256  956077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0224 13:29:04.824390  956077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0224 13:29:04.835928  956077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0224 13:29:04.836009  956077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0224 13:29:04.846849  956077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0224 13:29:04.857291  956077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0224 13:29:04.857371  956077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0224 13:29:04.868432  956077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0224 13:29:04.879429  956077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 13:29:05.016556  956077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 13:29:05.855312  956077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0224 13:29:06.068970  956077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 13:29:06.138545  956077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0224 13:29:06.252222  956077 api_server.go:52] waiting for apiserver process to appear ...
	I0224 13:29:06.252315  956077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:29:06.752623  956077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:29:07.253475  956077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:29:07.273087  956077 api_server.go:72] duration metric: took 1.020861784s to wait for apiserver process to appear ...
	I0224 13:29:07.273129  956077 api_server.go:88] waiting for apiserver healthz status ...
	I0224 13:29:07.273156  956077 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0224 13:29:07.273777  956077 api_server.go:269] stopped: https://192.168.39.43:8443/healthz: Get "https://192.168.39.43:8443/healthz": dial tcp 192.168.39.43:8443: connect: connection refused
	I0224 13:29:07.773461  956077 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0224 13:29:10.395720  956077 api_server.go:279] https://192.168.39.43:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0224 13:29:10.395756  956077 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0224 13:29:10.395777  956077 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0224 13:29:10.424020  956077 api_server.go:279] https://192.168.39.43:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0224 13:29:10.424060  956077 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0224 13:29:10.773537  956077 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0224 13:29:10.778715  956077 api_server.go:279] https://192.168.39.43:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0224 13:29:10.778749  956077 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 13:29:11.273360  956077 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0224 13:29:11.282850  956077 api_server.go:279] https://192.168.39.43:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0224 13:29:11.282888  956077 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 13:29:11.773530  956077 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0224 13:29:11.782399  956077 api_server.go:279] https://192.168.39.43:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0224 13:29:11.782431  956077 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 13:29:12.274112  956077 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0224 13:29:12.279760  956077 api_server.go:279] https://192.168.39.43:8443/healthz returned 200:
	ok
	I0224 13:29:12.286489  956077 api_server.go:141] control plane version: v1.32.2
	I0224 13:29:12.286522  956077 api_server.go:131] duration metric: took 5.013385837s to wait for apiserver health ...
	I0224 13:29:12.286533  956077 cni.go:84] Creating CNI manager for ""
	I0224 13:29:12.286540  956077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0224 13:29:12.288455  956077 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0224 13:29:12.289765  956077 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0224 13:29:12.302198  956077 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0224 13:29:12.341287  956077 system_pods.go:43] waiting for kube-system pods to appear ...
	I0224 13:29:12.353152  956077 system_pods.go:59] 8 kube-system pods found
	I0224 13:29:12.353227  956077 system_pods.go:61] "coredns-668d6bf9bc-5fzqg" [081ec828-51bc-43dd-8eb5-50027cd1e5ce] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0224 13:29:12.353242  956077 system_pods.go:61] "etcd-newest-cni-651381" [49ed84ef-a3f9-41e6-969d-9c36df52bd1e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0224 13:29:12.353256  956077 system_pods.go:61] "kube-apiserver-newest-cni-651381" [3fc7c3f3-60dd-4be5-83d3-43fff952ccb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0224 13:29:12.353266  956077 system_pods.go:61] "kube-controller-manager-newest-cni-651381" [f24e71f1-80e9-408a-b3d9-ad900b5e1955] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0224 13:29:12.353282  956077 system_pods.go:61] "kube-proxy-lh4cg" [024a70db-68c8-4faf-9072-9957034b592a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0224 13:29:12.353292  956077 system_pods.go:61] "kube-scheduler-newest-cni-651381" [9afed0fd-e49a-4d28-9504-1562a04fbb7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0224 13:29:12.353335  956077 system_pods.go:61] "metrics-server-f79f97bbb-zcgjt" [6afaa917-e3b5-4c04-8853-4936ba182e4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0224 13:29:12.353346  956077 system_pods.go:61] "storage-provisioner" [dd4ee237-b34c-481b-8a9d-ff296eca352b] Running
	I0224 13:29:12.353359  956077 system_pods.go:74] duration metric: took 12.029012ms to wait for pod list to return data ...
	I0224 13:29:12.353373  956077 node_conditions.go:102] verifying NodePressure condition ...
	I0224 13:29:12.364913  956077 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0224 13:29:12.364957  956077 node_conditions.go:123] node cpu capacity is 2
	I0224 13:29:12.364975  956077 node_conditions.go:105] duration metric: took 11.585246ms to run NodePressure ...
	I0224 13:29:12.365016  956077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 13:29:12.738521  956077 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0224 13:29:12.751756  956077 ops.go:34] apiserver oom_adj: -16
	I0224 13:29:12.751784  956077 kubeadm.go:597] duration metric: took 8.093182521s to restartPrimaryControlPlane
	I0224 13:29:12.751797  956077 kubeadm.go:394] duration metric: took 8.148429756s to StartCluster
	I0224 13:29:12.751815  956077 settings.go:142] acquiring lock: {Name:mk663e441d32b04abcccdab86db3e15276e74de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:29:12.751904  956077 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20451-887294/kubeconfig
	I0224 13:29:12.752732  956077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-887294/kubeconfig: {Name:mk0122b69f41cd40d5267f436266ccce22ce5ef2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 13:29:12.753015  956077 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0224 13:29:12.753115  956077 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0224 13:29:12.753237  956077 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-651381"
	I0224 13:29:12.753262  956077 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-651381"
	W0224 13:29:12.753270  956077 addons.go:247] addon storage-provisioner should already be in state true
	I0224 13:29:12.753272  956077 addons.go:69] Setting default-storageclass=true in profile "newest-cni-651381"
	I0224 13:29:12.753291  956077 config.go:182] Loaded profile config "newest-cni-651381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 13:29:12.753300  956077 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-651381"
	I0224 13:29:12.753324  956077 host.go:66] Checking if "newest-cni-651381" exists ...
	I0224 13:29:12.753334  956077 addons.go:69] Setting dashboard=true in profile "newest-cni-651381"
	I0224 13:29:12.753345  956077 addons.go:69] Setting metrics-server=true in profile "newest-cni-651381"
	I0224 13:29:12.753365  956077 addons.go:238] Setting addon dashboard=true in "newest-cni-651381"
	I0224 13:29:12.753372  956077 addons.go:238] Setting addon metrics-server=true in "newest-cni-651381"
	W0224 13:29:12.753382  956077 addons.go:247] addon dashboard should already be in state true
	W0224 13:29:12.753389  956077 addons.go:247] addon metrics-server should already be in state true
	I0224 13:29:12.753419  956077 host.go:66] Checking if "newest-cni-651381" exists ...
	I0224 13:29:12.753424  956077 host.go:66] Checking if "newest-cni-651381" exists ...
	I0224 13:29:12.753799  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:29:12.753809  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:29:12.753844  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:29:12.753852  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:29:12.753859  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:29:12.753877  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:29:12.753896  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:29:12.753907  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:29:12.756327  956077 out.go:177] * Verifying Kubernetes components...
	I0224 13:29:12.757988  956077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 13:29:12.770827  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41209
	I0224 13:29:12.771035  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34785
	I0224 13:29:12.771532  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:29:12.771609  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:29:12.772161  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:29:12.772186  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:29:12.772228  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:29:12.772250  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:29:12.772280  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46419
	I0224 13:29:12.772345  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39593
	I0224 13:29:12.772705  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:29:12.772733  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:29:12.772777  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:29:12.772856  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:29:12.772908  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetState
	I0224 13:29:12.773495  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:29:12.773541  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:29:12.773925  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:29:12.773937  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:29:12.773948  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:29:12.773953  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:29:12.774427  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:29:12.774735  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:29:12.775094  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:29:12.775132  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:29:12.775346  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:29:12.775386  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:29:12.790773  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38083
	I0224 13:29:12.791279  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:29:12.791520  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37107
	I0224 13:29:12.791793  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:29:12.791815  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:29:12.792028  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:29:12.792228  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:29:12.792458  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetState
	I0224 13:29:12.792693  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:29:12.792728  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:29:12.793147  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:29:12.793354  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetState
	I0224 13:29:12.794339  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:29:12.795159  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:29:12.796980  956077 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0224 13:29:12.797044  956077 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0224 13:29:12.798873  956077 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0224 13:29:12.798897  956077 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0224 13:29:12.798924  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:29:12.799025  956077 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0224 13:29:12.800379  956077 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0224 13:29:12.800413  956077 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0224 13:29:12.800444  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:29:12.802889  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:29:12.803112  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:29:12.803154  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:29:12.803253  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:29:12.803514  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:29:12.803684  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:29:12.803835  956077 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/newest-cni-651381/id_rsa Username:docker}
	I0224 13:29:12.804218  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:29:12.804781  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:29:12.804865  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:29:12.804986  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:29:12.805169  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:29:12.805331  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:29:12.805504  956077 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/newest-cni-651381/id_rsa Username:docker}
	I0224 13:29:12.805863  956077 addons.go:238] Setting addon default-storageclass=true in "newest-cni-651381"
	W0224 13:29:12.805886  956077 addons.go:247] addon default-storageclass should already be in state true
	I0224 13:29:12.805921  956077 host.go:66] Checking if "newest-cni-651381" exists ...
	I0224 13:29:12.806263  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:29:12.806310  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:29:12.822073  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36359
	I0224 13:29:12.822078  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33077
	I0224 13:29:12.822532  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:29:12.822608  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:29:12.823097  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:29:12.823120  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:29:12.823190  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:29:12.823208  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:29:12.823472  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:29:12.823587  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:29:12.823766  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetState
	I0224 13:29:12.824054  956077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 13:29:12.824092  956077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 13:29:12.825722  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:29:12.827968  956077 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 13:29:12.829697  956077 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0224 13:29:12.829721  956077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0224 13:29:12.829743  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:29:12.833829  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:29:12.834243  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:29:12.834272  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:29:12.834576  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:29:12.834868  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:29:12.835030  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:29:12.835176  956077 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/newest-cni-651381/id_rsa Username:docker}
	I0224 13:29:12.841346  956077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45733
	I0224 13:29:12.841788  956077 main.go:141] libmachine: () Calling .GetVersion
	I0224 13:29:12.842314  956077 main.go:141] libmachine: Using API Version  1
	I0224 13:29:12.842345  956077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 13:29:12.842757  956077 main.go:141] libmachine: () Calling .GetMachineName
	I0224 13:29:12.842974  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetState
	I0224 13:29:12.844679  956077 main.go:141] libmachine: (newest-cni-651381) Calling .DriverName
	I0224 13:29:12.844903  956077 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0224 13:29:12.844923  956077 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0224 13:29:12.844944  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHHostname
	I0224 13:29:12.847773  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:29:12.848236  956077 main.go:141] libmachine: (newest-cni-651381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:98:b8", ip: ""} in network mk-newest-cni-651381: {Iface:virbr4 ExpiryTime:2025-02-24 14:27:54 +0000 UTC Type:0 Mac:52:54:00:1b:98:b8 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:newest-cni-651381 Clientid:01:52:54:00:1b:98:b8}
	I0224 13:29:12.848274  956077 main.go:141] libmachine: (newest-cni-651381) DBG | domain newest-cni-651381 has defined IP address 192.168.39.43 and MAC address 52:54:00:1b:98:b8 in network mk-newest-cni-651381
	I0224 13:29:12.848424  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHPort
	I0224 13:29:12.848652  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHKeyPath
	I0224 13:29:12.848819  956077 main.go:141] libmachine: (newest-cni-651381) Calling .GetSSHUsername
	I0224 13:29:12.848952  956077 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/newest-cni-651381/id_rsa Username:docker}
	I0224 13:29:12.994330  956077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0224 13:29:13.013328  956077 api_server.go:52] waiting for apiserver process to appear ...
	I0224 13:29:13.013419  956077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:29:13.031907  956077 api_server.go:72] duration metric: took 278.851886ms to wait for apiserver process to appear ...
	I0224 13:29:13.031946  956077 api_server.go:88] waiting for apiserver healthz status ...
	I0224 13:29:13.031974  956077 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0224 13:29:13.037741  956077 api_server.go:279] https://192.168.39.43:8443/healthz returned 200:
	ok
	I0224 13:29:13.038717  956077 api_server.go:141] control plane version: v1.32.2
	I0224 13:29:13.038740  956077 api_server.go:131] duration metric: took 6.786687ms to wait for apiserver health ...
	I0224 13:29:13.038749  956077 system_pods.go:43] waiting for kube-system pods to appear ...
	I0224 13:29:13.041638  956077 system_pods.go:59] 8 kube-system pods found
	I0224 13:29:13.041677  956077 system_pods.go:61] "coredns-668d6bf9bc-5fzqg" [081ec828-51bc-43dd-8eb5-50027cd1e5ce] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0224 13:29:13.041689  956077 system_pods.go:61] "etcd-newest-cni-651381" [49ed84ef-a3f9-41e6-969d-9c36df52bd1e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0224 13:29:13.041699  956077 system_pods.go:61] "kube-apiserver-newest-cni-651381" [3fc7c3f3-60dd-4be5-83d3-43fff952ccb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0224 13:29:13.041707  956077 system_pods.go:61] "kube-controller-manager-newest-cni-651381" [f24e71f1-80e9-408a-b3d9-ad900b5e1955] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0224 13:29:13.041713  956077 system_pods.go:61] "kube-proxy-lh4cg" [024a70db-68c8-4faf-9072-9957034b592a] Running
	I0224 13:29:13.041723  956077 system_pods.go:61] "kube-scheduler-newest-cni-651381" [9afed0fd-e49a-4d28-9504-1562a04fbb7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0224 13:29:13.041734  956077 system_pods.go:61] "metrics-server-f79f97bbb-zcgjt" [6afaa917-e3b5-4c04-8853-4936ba182e4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0224 13:29:13.041744  956077 system_pods.go:61] "storage-provisioner" [dd4ee237-b34c-481b-8a9d-ff296eca352b] Running
	I0224 13:29:13.041755  956077 system_pods.go:74] duration metric: took 2.998451ms to wait for pod list to return data ...
	I0224 13:29:13.041769  956077 default_sa.go:34] waiting for default service account to be created ...
	I0224 13:29:13.045370  956077 default_sa.go:45] found service account: "default"
	I0224 13:29:13.045406  956077 default_sa.go:55] duration metric: took 3.628344ms for default service account to be created ...
	I0224 13:29:13.045423  956077 kubeadm.go:582] duration metric: took 292.373047ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0224 13:29:13.045461  956077 node_conditions.go:102] verifying NodePressure condition ...
	I0224 13:29:13.048412  956077 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0224 13:29:13.048450  956077 node_conditions.go:123] node cpu capacity is 2
	I0224 13:29:13.048465  956077 node_conditions.go:105] duration metric: took 2.99453ms to run NodePressure ...
	I0224 13:29:13.048482  956077 start.go:241] waiting for startup goroutines ...
	I0224 13:29:13.107171  956077 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0224 13:29:13.107201  956077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0224 13:29:13.119071  956077 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0224 13:29:13.119103  956077 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0224 13:29:13.134996  956077 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0224 13:29:13.135034  956077 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0224 13:29:13.155551  956077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0224 13:29:13.185957  956077 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0224 13:29:13.185995  956077 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0224 13:29:13.186048  956077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0224 13:29:13.188044  956077 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0224 13:29:13.188069  956077 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0224 13:29:13.231557  956077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0224 13:29:13.247560  956077 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0224 13:29:13.247593  956077 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0224 13:29:13.353680  956077 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0224 13:29:13.353706  956077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0224 13:29:13.453436  956077 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0224 13:29:13.453467  956077 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0224 13:29:13.612651  956077 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0224 13:29:13.612689  956077 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0224 13:29:13.761435  956077 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0224 13:29:13.761484  956077 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0224 13:29:11.224324  953268 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0224 13:29:11.225286  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:29:11.225572  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:29:13.875252  956077 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0224 13:29:13.875291  956077 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0224 13:29:13.988211  956077 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0224 13:29:13.988245  956077 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0224 13:29:14.040504  956077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0224 13:29:14.735719  956077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.580126907s)
	I0224 13:29:14.735772  956077 main.go:141] libmachine: Making call to close driver server
	I0224 13:29:14.735781  956077 main.go:141] libmachine: (newest-cni-651381) Calling .Close
	I0224 13:29:14.735890  956077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.549811056s)
	I0224 13:29:14.735948  956077 main.go:141] libmachine: Making call to close driver server
	I0224 13:29:14.735960  956077 main.go:141] libmachine: (newest-cni-651381) Calling .Close
	I0224 13:29:14.736196  956077 main.go:141] libmachine: (newest-cni-651381) DBG | Closing plugin on server side
	I0224 13:29:14.736226  956077 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:29:14.736242  956077 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:29:14.736258  956077 main.go:141] libmachine: Making call to close driver server
	I0224 13:29:14.736272  956077 main.go:141] libmachine: (newest-cni-651381) Calling .Close
	I0224 13:29:14.736296  956077 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:29:14.736311  956077 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:29:14.736321  956077 main.go:141] libmachine: Making call to close driver server
	I0224 13:29:14.736344  956077 main.go:141] libmachine: (newest-cni-651381) Calling .Close
	I0224 13:29:14.736595  956077 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:29:14.736611  956077 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:29:14.736658  956077 main.go:141] libmachine: (newest-cni-651381) DBG | Closing plugin on server side
	I0224 13:29:14.736872  956077 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:29:14.736892  956077 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:29:14.745116  956077 main.go:141] libmachine: Making call to close driver server
	I0224 13:29:14.745148  956077 main.go:141] libmachine: (newest-cni-651381) Calling .Close
	I0224 13:29:14.745492  956077 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:29:14.745517  956077 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:29:14.745526  956077 main.go:141] libmachine: (newest-cni-651381) DBG | Closing plugin on server side
	I0224 13:29:14.894767  956077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.663157775s)
	I0224 13:29:14.894851  956077 main.go:141] libmachine: Making call to close driver server
	I0224 13:29:14.894872  956077 main.go:141] libmachine: (newest-cni-651381) Calling .Close
	I0224 13:29:14.895200  956077 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:29:14.895223  956077 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:29:14.895234  956077 main.go:141] libmachine: Making call to close driver server
	I0224 13:29:14.895241  956077 main.go:141] libmachine: (newest-cni-651381) Calling .Close
	I0224 13:29:14.895512  956077 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:29:14.895531  956077 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:29:14.895543  956077 addons.go:479] Verifying addon metrics-server=true in "newest-cni-651381"
	I0224 13:29:15.529417  956077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.48881961s)
	I0224 13:29:15.529510  956077 main.go:141] libmachine: Making call to close driver server
	I0224 13:29:15.529526  956077 main.go:141] libmachine: (newest-cni-651381) Calling .Close
	I0224 13:29:15.529885  956077 main.go:141] libmachine: (newest-cni-651381) DBG | Closing plugin on server side
	I0224 13:29:15.529896  956077 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:29:15.529910  956077 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:29:15.529921  956077 main.go:141] libmachine: Making call to close driver server
	I0224 13:29:15.529930  956077 main.go:141] libmachine: (newest-cni-651381) Calling .Close
	I0224 13:29:15.530216  956077 main.go:141] libmachine: Successfully made call to close driver server
	I0224 13:29:15.530235  956077 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 13:29:15.532337  956077 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-651381 addons enable metrics-server
	
	I0224 13:29:15.534011  956077 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0224 13:29:15.535543  956077 addons.go:514] duration metric: took 2.78244386s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0224 13:29:15.535586  956077 start.go:246] waiting for cluster config update ...
	I0224 13:29:15.535599  956077 start.go:255] writing updated cluster config ...
	I0224 13:29:15.535868  956077 ssh_runner.go:195] Run: rm -f paused
	I0224 13:29:15.604806  956077 start.go:600] kubectl: 1.32.2, cluster: 1.32.2 (minor skew: 0)
	I0224 13:29:15.606756  956077 out.go:177] * Done! kubectl is now configured to use "newest-cni-651381" cluster and "default" namespace by default
	I0224 13:29:16.226144  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:29:16.226358  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:29:26.227187  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:29:26.227476  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:29:46.228012  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:29:46.228297  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:30:26.229952  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:30:26.230229  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:30:26.230260  953268 kubeadm.go:310] 
	I0224 13:30:26.230300  953268 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0224 13:30:26.230364  953268 kubeadm.go:310] 		timed out waiting for the condition
	I0224 13:30:26.230392  953268 kubeadm.go:310] 
	I0224 13:30:26.230441  953268 kubeadm.go:310] 	This error is likely caused by:
	I0224 13:30:26.230505  953268 kubeadm.go:310] 		- The kubelet is not running
	I0224 13:30:26.230648  953268 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0224 13:30:26.230661  953268 kubeadm.go:310] 
	I0224 13:30:26.230806  953268 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0224 13:30:26.230857  953268 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0224 13:30:26.230902  953268 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0224 13:30:26.230911  953268 kubeadm.go:310] 
	I0224 13:30:26.231038  953268 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0224 13:30:26.231147  953268 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0224 13:30:26.231163  953268 kubeadm.go:310] 
	I0224 13:30:26.231301  953268 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0224 13:30:26.231435  953268 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0224 13:30:26.231545  953268 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0224 13:30:26.231657  953268 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0224 13:30:26.231675  953268 kubeadm.go:310] 
	I0224 13:30:26.232473  953268 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 13:30:26.232591  953268 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0224 13:30:26.232710  953268 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0224 13:30:26.232936  953268 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0224 13:30:26.232991  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0224 13:30:26.704666  953268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 13:30:26.720451  953268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 13:30:26.732280  953268 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 13:30:26.732306  953268 kubeadm.go:157] found existing configuration files:
	
	I0224 13:30:26.732371  953268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0224 13:30:26.743971  953268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0224 13:30:26.744050  953268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0224 13:30:26.755216  953268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0224 13:30:26.766460  953268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0224 13:30:26.766542  953268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0224 13:30:26.778117  953268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0224 13:30:26.789142  953268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0224 13:30:26.789208  953268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0224 13:30:26.800621  953268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0224 13:30:26.811672  953268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0224 13:30:26.811755  953268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0224 13:30:26.823061  953268 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0224 13:30:27.039614  953268 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 13:32:23.115672  953268 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0224 13:32:23.115858  953268 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0224 13:32:23.117520  953268 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0224 13:32:23.117626  953268 kubeadm.go:310] [preflight] Running pre-flight checks
	I0224 13:32:23.117831  953268 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0224 13:32:23.118008  953268 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0224 13:32:23.118171  953268 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0224 13:32:23.118281  953268 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0224 13:32:23.120434  953268 out.go:235]   - Generating certificates and keys ...
	I0224 13:32:23.120529  953268 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0224 13:32:23.120621  953268 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0224 13:32:23.120736  953268 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0224 13:32:23.120819  953268 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0224 13:32:23.120905  953268 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0224 13:32:23.120957  953268 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0224 13:32:23.121011  953268 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0224 13:32:23.121066  953268 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0224 13:32:23.121134  953268 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0224 13:32:23.121202  953268 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0224 13:32:23.121237  953268 kubeadm.go:310] [certs] Using the existing "sa" key
	I0224 13:32:23.121355  953268 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0224 13:32:23.121422  953268 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0224 13:32:23.121526  953268 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0224 13:32:23.121602  953268 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0224 13:32:23.121654  953268 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0224 13:32:23.121775  953268 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 13:32:23.121914  953268 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 13:32:23.121964  953268 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0224 13:32:23.122028  953268 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0224 13:32:23.123732  953268 out.go:235]   - Booting up control plane ...
	I0224 13:32:23.123835  953268 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0224 13:32:23.123904  953268 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0224 13:32:23.123986  953268 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0224 13:32:23.124096  953268 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0224 13:32:23.124279  953268 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0224 13:32:23.124332  953268 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0224 13:32:23.124401  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:32:23.124595  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:32:23.124691  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:32:23.124893  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:32:23.124960  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:32:23.125150  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:32:23.125220  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:32:23.125409  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:32:23.125508  953268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 13:32:23.125791  953268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 13:32:23.125817  953268 kubeadm.go:310] 
	I0224 13:32:23.125871  953268 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0224 13:32:23.125925  953268 kubeadm.go:310] 		timed out waiting for the condition
	I0224 13:32:23.125935  953268 kubeadm.go:310] 
	I0224 13:32:23.125985  953268 kubeadm.go:310] 	This error is likely caused by:
	I0224 13:32:23.126040  953268 kubeadm.go:310] 		- The kubelet is not running
	I0224 13:32:23.126194  953268 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0224 13:32:23.126222  953268 kubeadm.go:310] 
	I0224 13:32:23.126328  953268 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0224 13:32:23.126364  953268 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0224 13:32:23.126411  953268 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0224 13:32:23.126421  953268 kubeadm.go:310] 
	I0224 13:32:23.126543  953268 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0224 13:32:23.126655  953268 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0224 13:32:23.126665  953268 kubeadm.go:310] 
	I0224 13:32:23.126777  953268 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0224 13:32:23.126856  953268 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0224 13:32:23.126925  953268 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0224 13:32:23.127003  953268 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0224 13:32:23.127087  953268 kubeadm.go:310] 
	I0224 13:32:23.127095  953268 kubeadm.go:394] duration metric: took 7m58.850238597s to StartCluster
	I0224 13:32:23.127168  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0224 13:32:23.127245  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0224 13:32:23.173206  953268 cri.go:89] found id: ""
	I0224 13:32:23.173252  953268 logs.go:282] 0 containers: []
	W0224 13:32:23.173265  953268 logs.go:284] No container was found matching "kube-apiserver"
	I0224 13:32:23.173274  953268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0224 13:32:23.173355  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0224 13:32:23.220974  953268 cri.go:89] found id: ""
	I0224 13:32:23.221008  953268 logs.go:282] 0 containers: []
	W0224 13:32:23.221017  953268 logs.go:284] No container was found matching "etcd"
	I0224 13:32:23.221024  953268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0224 13:32:23.221095  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0224 13:32:23.256282  953268 cri.go:89] found id: ""
	I0224 13:32:23.256316  953268 logs.go:282] 0 containers: []
	W0224 13:32:23.256327  953268 logs.go:284] No container was found matching "coredns"
	I0224 13:32:23.256335  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0224 13:32:23.256423  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0224 13:32:23.292296  953268 cri.go:89] found id: ""
	I0224 13:32:23.292329  953268 logs.go:282] 0 containers: []
	W0224 13:32:23.292340  953268 logs.go:284] No container was found matching "kube-scheduler"
	I0224 13:32:23.292355  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0224 13:32:23.292422  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0224 13:32:23.328368  953268 cri.go:89] found id: ""
	I0224 13:32:23.328399  953268 logs.go:282] 0 containers: []
	W0224 13:32:23.328408  953268 logs.go:284] No container was found matching "kube-proxy"
	I0224 13:32:23.328414  953268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0224 13:32:23.328488  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0224 13:32:23.380963  953268 cri.go:89] found id: ""
	I0224 13:32:23.380995  953268 logs.go:282] 0 containers: []
	W0224 13:32:23.381005  953268 logs.go:284] No container was found matching "kube-controller-manager"
	I0224 13:32:23.381014  953268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0224 13:32:23.381083  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0224 13:32:23.448170  953268 cri.go:89] found id: ""
	I0224 13:32:23.448206  953268 logs.go:282] 0 containers: []
	W0224 13:32:23.448219  953268 logs.go:284] No container was found matching "kindnet"
	I0224 13:32:23.448227  953268 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0224 13:32:23.448301  953268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0224 13:32:23.494938  953268 cri.go:89] found id: ""
	I0224 13:32:23.494969  953268 logs.go:282] 0 containers: []
	W0224 13:32:23.494978  953268 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0224 13:32:23.494989  953268 logs.go:123] Gathering logs for kubelet ...
	I0224 13:32:23.495004  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 13:32:23.545770  953268 logs.go:123] Gathering logs for dmesg ...
	I0224 13:32:23.545817  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 13:32:23.561559  953268 logs.go:123] Gathering logs for describe nodes ...
	I0224 13:32:23.561608  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 13:32:23.639942  953268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 13:32:23.639969  953268 logs.go:123] Gathering logs for CRI-O ...
	I0224 13:32:23.639983  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0224 13:32:23.748671  953268 logs.go:123] Gathering logs for container status ...
	I0224 13:32:23.748715  953268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0224 13:32:23.790465  953268 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0224 13:32:23.790543  953268 out.go:270] * 
	W0224 13:32:23.790632  953268 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0224 13:32:23.790650  953268 out.go:270] * 
	W0224 13:32:23.791585  953268 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0224 13:32:23.796216  953268 out.go:201] 
	W0224 13:32:23.797430  953268 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0224 13:32:23.797505  953268 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0224 13:32:23.797547  953268 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0224 13:32:23.799102  953268 out.go:201] 
	
	
	==> CRI-O <==
	Feb 24 13:47:16 old-k8s-version-233759 crio[626]: time="2025-02-24 13:47:16.577432658Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740404836577399041,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5aaefbf6-c22d-4b29-8f51-5f9f2f934e2c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:47:16 old-k8s-version-233759 crio[626]: time="2025-02-24 13:47:16.578313565Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5e67d28a-68ab-4353-867f-b7554af8cd3c name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:47:16 old-k8s-version-233759 crio[626]: time="2025-02-24 13:47:16.578385524Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5e67d28a-68ab-4353-867f-b7554af8cd3c name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:47:16 old-k8s-version-233759 crio[626]: time="2025-02-24 13:47:16.578418863Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5e67d28a-68ab-4353-867f-b7554af8cd3c name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:47:16 old-k8s-version-233759 crio[626]: time="2025-02-24 13:47:16.613387318Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0b50a7ad-cfa6-4f57-b42f-68a243b369a8 name=/runtime.v1.RuntimeService/Version
	Feb 24 13:47:16 old-k8s-version-233759 crio[626]: time="2025-02-24 13:47:16.613488931Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0b50a7ad-cfa6-4f57-b42f-68a243b369a8 name=/runtime.v1.RuntimeService/Version
	Feb 24 13:47:16 old-k8s-version-233759 crio[626]: time="2025-02-24 13:47:16.615396375Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f627dee5-2f52-4828-93bc-a40e65607ac6 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:47:16 old-k8s-version-233759 crio[626]: time="2025-02-24 13:47:16.615868280Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740404836615846871,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f627dee5-2f52-4828-93bc-a40e65607ac6 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:47:16 old-k8s-version-233759 crio[626]: time="2025-02-24 13:47:16.616429890Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ac5ea040-082e-43ad-94f4-57d066d974c4 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:47:16 old-k8s-version-233759 crio[626]: time="2025-02-24 13:47:16.616505906Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ac5ea040-082e-43ad-94f4-57d066d974c4 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:47:16 old-k8s-version-233759 crio[626]: time="2025-02-24 13:47:16.616539230Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ac5ea040-082e-43ad-94f4-57d066d974c4 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:47:16 old-k8s-version-233759 crio[626]: time="2025-02-24 13:47:16.650659999Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=13067fd6-b96d-4f72-b9e3-574c7b0ccb72 name=/runtime.v1.RuntimeService/Version
	Feb 24 13:47:16 old-k8s-version-233759 crio[626]: time="2025-02-24 13:47:16.650803645Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=13067fd6-b96d-4f72-b9e3-574c7b0ccb72 name=/runtime.v1.RuntimeService/Version
	Feb 24 13:47:16 old-k8s-version-233759 crio[626]: time="2025-02-24 13:47:16.652074894Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5182e46c-638d-480e-9ff2-f8efe032d1ac name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:47:16 old-k8s-version-233759 crio[626]: time="2025-02-24 13:47:16.652483156Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740404836652455839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5182e46c-638d-480e-9ff2-f8efe032d1ac name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:47:16 old-k8s-version-233759 crio[626]: time="2025-02-24 13:47:16.653227784Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=58d4bb49-f0ec-4319-adee-5be527325099 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:47:16 old-k8s-version-233759 crio[626]: time="2025-02-24 13:47:16.653296318Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=58d4bb49-f0ec-4319-adee-5be527325099 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:47:16 old-k8s-version-233759 crio[626]: time="2025-02-24 13:47:16.653343383Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=58d4bb49-f0ec-4319-adee-5be527325099 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:47:16 old-k8s-version-233759 crio[626]: time="2025-02-24 13:47:16.691592984Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1e2cf689-b2f0-4eb6-bdef-ac4e2931c854 name=/runtime.v1.RuntimeService/Version
	Feb 24 13:47:16 old-k8s-version-233759 crio[626]: time="2025-02-24 13:47:16.691688353Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1e2cf689-b2f0-4eb6-bdef-ac4e2931c854 name=/runtime.v1.RuntimeService/Version
	Feb 24 13:47:16 old-k8s-version-233759 crio[626]: time="2025-02-24 13:47:16.692949607Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=11553141-7df4-4d2d-8517-d53f17a9c643 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:47:16 old-k8s-version-233759 crio[626]: time="2025-02-24 13:47:16.693403248Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740404836693383600,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=11553141-7df4-4d2d-8517-d53f17a9c643 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 24 13:47:16 old-k8s-version-233759 crio[626]: time="2025-02-24 13:47:16.694190581Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=adff5604-3c2f-4f1c-8251-d380aca378a7 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:47:16 old-k8s-version-233759 crio[626]: time="2025-02-24 13:47:16.694256574Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=adff5604-3c2f-4f1c-8251-d380aca378a7 name=/runtime.v1.RuntimeService/ListContainers
	Feb 24 13:47:16 old-k8s-version-233759 crio[626]: time="2025-02-24 13:47:16.694296155Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=adff5604-3c2f-4f1c-8251-d380aca378a7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb24 13:23] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054709] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042708] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Feb24 13:24] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.133792] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.074673] overlayfs: failed to resolve '/var/lib/containers/storage/overlay/opaque-bug-check3889635992/l1': -2
	[  +0.613262] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.739778] systemd-fstab-generator[553]: Ignoring "noauto" option for root device
	[  +0.062960] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072258] systemd-fstab-generator[565]: Ignoring "noauto" option for root device
	[  +0.214347] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.136588] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.281511] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +7.250646] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.068712] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.282155] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[ +12.331271] kauditd_printk_skb: 46 callbacks suppressed
	[Feb24 13:28] systemd-fstab-generator[4979]: Ignoring "noauto" option for root device
	[Feb24 13:30] systemd-fstab-generator[5261]: Ignoring "noauto" option for root device
	[  +0.064430] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:47:16 up 23 min,  0 users,  load average: 0.02, 0.04, 0.02
	Linux old-k8s-version-233759 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 24 13:47:14 old-k8s-version-233759 kubelet[7080]:         /usr/local/go/src/net/tcpsock_posix.go:61 +0xd7
	Feb 24 13:47:14 old-k8s-version-233759 kubelet[7080]: net.(*sysDialer).dialSingle(0xc000ac9900, 0x4f7fe40, 0xc000bd9140, 0x4f1ff00, 0xc000d28330, 0x0, 0x0, 0x0, 0x0)
	Feb 24 13:47:14 old-k8s-version-233759 kubelet[7080]:         /usr/local/go/src/net/dial.go:580 +0x5e5
	Feb 24 13:47:14 old-k8s-version-233759 kubelet[7080]: net.(*sysDialer).dialSerial(0xc000ac9900, 0x4f7fe40, 0xc000bd9140, 0xc000b479f0, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Feb 24 13:47:14 old-k8s-version-233759 kubelet[7080]:         /usr/local/go/src/net/dial.go:548 +0x152
	Feb 24 13:47:14 old-k8s-version-233759 kubelet[7080]: net.(*Dialer).DialContext(0xc000acbe60, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000b952f0, 0x24, 0x0, 0x0, 0x0, ...)
	Feb 24 13:47:14 old-k8s-version-233759 kubelet[7080]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Feb 24 13:47:14 old-k8s-version-233759 kubelet[7080]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000af4b40, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000b952f0, 0x24, 0x60, 0x7fcb1829c888, 0x118, ...)
	Feb 24 13:47:14 old-k8s-version-233759 kubelet[7080]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Feb 24 13:47:14 old-k8s-version-233759 kubelet[7080]: net/http.(*Transport).dial(0xc000a48dc0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000b952f0, 0x24, 0x0, 0x0, 0x0, ...)
	Feb 24 13:47:14 old-k8s-version-233759 kubelet[7080]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Feb 24 13:47:14 old-k8s-version-233759 kubelet[7080]: net/http.(*Transport).dialConn(0xc000a48dc0, 0x4f7fe00, 0xc000052030, 0x0, 0xc000b3d380, 0x5, 0xc000b952f0, 0x24, 0x0, 0xc000be6360, ...)
	Feb 24 13:47:14 old-k8s-version-233759 kubelet[7080]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Feb 24 13:47:14 old-k8s-version-233759 kubelet[7080]: net/http.(*Transport).dialConnFor(0xc000a48dc0, 0xc000b502c0)
	Feb 24 13:47:14 old-k8s-version-233759 kubelet[7080]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Feb 24 13:47:14 old-k8s-version-233759 kubelet[7080]: created by net/http.(*Transport).queueForDial
	Feb 24 13:47:14 old-k8s-version-233759 kubelet[7080]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Feb 24 13:47:15 old-k8s-version-233759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 175.
	Feb 24 13:47:15 old-k8s-version-233759 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 24 13:47:15 old-k8s-version-233759 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 24 13:47:15 old-k8s-version-233759 kubelet[7088]: I0224 13:47:15.676175    7088 server.go:416] Version: v1.20.0
	Feb 24 13:47:15 old-k8s-version-233759 kubelet[7088]: I0224 13:47:15.676592    7088 server.go:837] Client rotation is on, will bootstrap in background
	Feb 24 13:47:15 old-k8s-version-233759 kubelet[7088]: I0224 13:47:15.678684    7088 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 24 13:47:15 old-k8s-version-233759 kubelet[7088]: I0224 13:47:15.679733    7088 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Feb 24 13:47:15 old-k8s-version-233759 kubelet[7088]: W0224 13:47:15.679953    7088 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-233759 -n old-k8s-version-233759
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-233759 -n old-k8s-version-233759: exit status 2 (236.417848ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-233759" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (350.22s)
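Note: the kubelet log above shows a crash loop (systemd restart counter at 175) while the kubelet tries to dial the API server. A minimal sketch for inspecting such a loop by hand, assuming the old-k8s-version-233759 profile from this run is still present; these are plain minikube and systemd commands, not part of the test suite:

    # Open a shell on the affected node
    out/minikube-linux-amd64 ssh -p old-k8s-version-233759

    # Inside the VM: unit state and the most recent kubelet log lines
    sudo systemctl status kubelet
    sudo journalctl -u kubelet --no-pager -n 100

This is essentially the same information that "out/minikube-linux-amd64 -p old-k8s-version-233759 logs" collects into the post-mortem output above.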

                                                
                                    

Test pass (271/321)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 26.71
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.15
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.32.2/json-events 15.04
13 TestDownloadOnly/v1.32.2/preload-exists 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.07
18 TestDownloadOnly/v1.32.2/DeleteAll 0.15
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.64
22 TestOffline 88.18
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 206.07
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 9.56
35 TestAddons/parallel/Registry 16.32
37 TestAddons/parallel/InspektorGadget 12.23
38 TestAddons/parallel/MetricsServer 5.84
40 TestAddons/parallel/CSI 170.64
41 TestAddons/parallel/Headlamp 141.5
42 TestAddons/parallel/CloudSpanner 5.83
43 TestAddons/parallel/LocalPath 146.2
44 TestAddons/parallel/NvidiaDevicePlugin 6.54
45 TestAddons/parallel/Yakd 11.83
47 TestAddons/StoppedEnableDisable 91.31
48 TestCertOptions 89
49 TestCertExpiration 279.47
51 TestForceSystemdFlag 54.45
52 TestForceSystemdEnv 48.8
54 TestKVMDriverInstallOrUpdate 4.76
58 TestErrorSpam/setup 43.2
59 TestErrorSpam/start 0.39
60 TestErrorSpam/status 0.77
61 TestErrorSpam/pause 1.67
62 TestErrorSpam/unpause 1.74
63 TestErrorSpam/stop 5.31
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 84.53
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 34.78
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.14
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.3
75 TestFunctional/serial/CacheCmd/cache/add_local 2.3
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.73
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 276.35
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.64
86 TestFunctional/serial/LogsFileCmd 1.62
87 TestFunctional/serial/InvalidService 4.27
89 TestFunctional/parallel/ConfigCmd 0.39
90 TestFunctional/parallel/DashboardCmd 15.41
91 TestFunctional/parallel/DryRun 0.53
92 TestFunctional/parallel/InternationalLanguage 0.15
93 TestFunctional/parallel/StatusCmd 0.84
97 TestFunctional/parallel/ServiceCmdConnect 12.26
98 TestFunctional/parallel/AddonsCmd 0.18
99 TestFunctional/parallel/PersistentVolumeClaim 48.74
101 TestFunctional/parallel/SSHCmd 0.45
102 TestFunctional/parallel/CpCmd 1.42
103 TestFunctional/parallel/MySQL 27.38
104 TestFunctional/parallel/FileSync 0.29
105 TestFunctional/parallel/CertSync 1.51
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
113 TestFunctional/parallel/License 0.74
114 TestFunctional/parallel/Version/short 0.06
115 TestFunctional/parallel/Version/components 0.6
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.43
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.68
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
120 TestFunctional/parallel/ImageCommands/ImageBuild 7.73
121 TestFunctional/parallel/ImageCommands/Setup 1.94
122 TestFunctional/parallel/ServiceCmd/DeployApp 12.18
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
135 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.91
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.94
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.2
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.85
139 TestFunctional/parallel/ImageCommands/ImageRemove 2.25
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 3.68
141 TestFunctional/parallel/ServiceCmd/List 0.34
142 TestFunctional/parallel/ServiceCmd/JSONOutput 0.38
143 TestFunctional/parallel/ServiceCmd/HTTPS 0.34
144 TestFunctional/parallel/ServiceCmd/Format 0.32
145 TestFunctional/parallel/ServiceCmd/URL 0.38
146 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 4.4
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
148 TestFunctional/parallel/ProfileCmd/profile_list 0.54
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.48
150 TestFunctional/parallel/MountCmd/any-port 11.71
151 TestFunctional/parallel/MountCmd/specific-port 1.9
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.75
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 200.5
161 TestMultiControlPlane/serial/DeployApp 7.26
162 TestMultiControlPlane/serial/PingHostFromPods 1.26
163 TestMultiControlPlane/serial/AddWorkerNode 60.04
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.92
166 TestMultiControlPlane/serial/CopyFile 13.72
167 TestMultiControlPlane/serial/StopSecondaryNode 91.53
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.69
169 TestMultiControlPlane/serial/RestartSecondaryNode 55.81
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.91
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 443.5
172 TestMultiControlPlane/serial/DeleteSecondaryNode 18.56
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.65
174 TestMultiControlPlane/serial/StopCluster 272.77
175 TestMultiControlPlane/serial/RestartCluster 124.63
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.66
177 TestMultiControlPlane/serial/AddSecondaryNode 79.44
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.93
182 TestJSONOutput/start/Command 60.41
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.75
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.65
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 7.41
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.22
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 89.67
214 TestMountStart/serial/StartWithMountFirst 25.87
215 TestMountStart/serial/VerifyMountFirst 0.4
216 TestMountStart/serial/StartWithMountSecond 27.53
217 TestMountStart/serial/VerifyMountSecond 0.39
218 TestMountStart/serial/DeleteFirst 0.91
219 TestMountStart/serial/VerifyMountPostDelete 0.39
220 TestMountStart/serial/Stop 1.34
221 TestMountStart/serial/RestartStopped 23.16
222 TestMountStart/serial/VerifyMountPostStop 0.4
225 TestMultiNode/serial/FreshStart2Nodes 119.96
226 TestMultiNode/serial/DeployApp2Nodes 5.81
227 TestMultiNode/serial/PingHostFrom2Pods 0.84
228 TestMultiNode/serial/AddNode 53.74
229 TestMultiNode/serial/MultiNodeLabels 0.06
230 TestMultiNode/serial/ProfileList 0.6
231 TestMultiNode/serial/CopyFile 7.58
232 TestMultiNode/serial/StopNode 2.45
233 TestMultiNode/serial/StartAfterStop 44.54
234 TestMultiNode/serial/RestartKeepsNodes 346.71
235 TestMultiNode/serial/DeleteNode 2.85
236 TestMultiNode/serial/StopMultiNode 182.11
237 TestMultiNode/serial/RestartMultiNode 115.37
238 TestMultiNode/serial/ValidateNameConflict 48.75
245 TestScheduledStopUnix 117
249 TestRunningBinaryUpgrade 239.98
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
255 TestNoKubernetes/serial/StartWithK8s 104.62
263 TestNetworkPlugins/group/false 3.22
268 TestPause/serial/Start 105.63
269 TestNoKubernetes/serial/StartWithStopK8s 68.93
270 TestNoKubernetes/serial/Start 28.82
272 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
273 TestNoKubernetes/serial/ProfileList 25.5
274 TestNoKubernetes/serial/Stop 1.31
275 TestNoKubernetes/serial/StartNoArgs 22.63
283 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
284 TestStoppedBinaryUpgrade/Setup 3.72
285 TestStoppedBinaryUpgrade/Upgrade 128.83
286 TestNetworkPlugins/group/auto/Start 60.17
287 TestNetworkPlugins/group/auto/KubeletFlags 0.4
288 TestNetworkPlugins/group/auto/NetCatPod 11.73
289 TestNetworkPlugins/group/auto/DNS 0.16
290 TestNetworkPlugins/group/auto/Localhost 0.13
291 TestNetworkPlugins/group/auto/HairPin 0.13
292 TestStoppedBinaryUpgrade/MinikubeLogs 1.18
293 TestNetworkPlugins/group/kindnet/Start 70.26
294 TestNetworkPlugins/group/calico/Start 98.96
295 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
296 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
297 TestNetworkPlugins/group/kindnet/NetCatPod 11.3
298 TestNetworkPlugins/group/custom-flannel/Start 74.33
299 TestNetworkPlugins/group/kindnet/DNS 0.19
300 TestNetworkPlugins/group/kindnet/Localhost 0.14
301 TestNetworkPlugins/group/kindnet/HairPin 0.17
302 TestNetworkPlugins/group/enable-default-cni/Start 93.68
303 TestNetworkPlugins/group/flannel/Start 104.23
304 TestNetworkPlugins/group/calico/ControllerPod 6.01
305 TestNetworkPlugins/group/calico/KubeletFlags 0.24
306 TestNetworkPlugins/group/calico/NetCatPod 12.27
307 TestNetworkPlugins/group/calico/DNS 0.18
308 TestNetworkPlugins/group/calico/Localhost 0.15
309 TestNetworkPlugins/group/calico/HairPin 0.14
310 TestNetworkPlugins/group/bridge/Start 105.14
311 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
312 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.29
313 TestNetworkPlugins/group/custom-flannel/DNS 0.2
314 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
315 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
318 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
319 TestNetworkPlugins/group/enable-default-cni/NetCatPod 14.4
320 TestNetworkPlugins/group/flannel/ControllerPod 6.01
321 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
322 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
323 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
324 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
325 TestNetworkPlugins/group/flannel/NetCatPod 13.56
326 TestNetworkPlugins/group/flannel/DNS 0.2
327 TestNetworkPlugins/group/flannel/Localhost 0.17
328 TestNetworkPlugins/group/flannel/HairPin 0.16
330 TestStartStop/group/no-preload/serial/FirstStart 105.94
332 TestStartStop/group/embed-certs/serial/FirstStart 100.26
333 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
334 TestNetworkPlugins/group/bridge/NetCatPod 11.28
335 TestNetworkPlugins/group/bridge/DNS 0.17
336 TestNetworkPlugins/group/bridge/Localhost 0.15
337 TestNetworkPlugins/group/bridge/HairPin 0.14
339 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 61.27
340 TestStartStop/group/no-preload/serial/DeployApp 11.3
341 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.31
342 TestStartStop/group/embed-certs/serial/DeployApp 12.31
343 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.13
344 TestStartStop/group/no-preload/serial/Stop 90.88
345 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.02
346 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.04
347 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.02
348 TestStartStop/group/embed-certs/serial/Stop 91.04
349 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
350 TestStartStop/group/no-preload/serial/SecondStart 350.9
351 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
352 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 353.96
353 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
354 TestStartStop/group/embed-certs/serial/SecondStart 335.91
357 TestStartStop/group/old-k8s-version/serial/Stop 2.45
358 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
360 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
361 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 17.01
362 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
363 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
364 TestStartStop/group/embed-certs/serial/Pause 3.42
365 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 14.01
367 TestStartStop/group/newest-cni/serial/FirstStart 47.91
368 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
369 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
370 TestStartStop/group/no-preload/serial/Pause 3.21
371 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
372 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
373 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.91
374 TestStartStop/group/newest-cni/serial/DeployApp 0
375 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.2
376 TestStartStop/group/newest-cni/serial/Stop 11.36
377 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
378 TestStartStop/group/newest-cni/serial/SecondStart 37.16
379 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
380 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
381 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
382 TestStartStop/group/newest-cni/serial/Pause 2.61
TestDownloadOnly/v1.20.0/json-events (26.71s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-675121 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-675121 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (26.712348152s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (26.71s)
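Note: a reproduction sketch of the download-only flow exercised here, assuming a local build of the binary at out/minikube-linux-amd64 as in this run; the default cache location (~/.minikube) is an assumption, the job above overrides it via MINIKUBE_HOME.

    # Download the ISO, preload tarball and binaries without creating a VM
    out/minikube-linux-amd64 start -o=json --download-only -p download-only-675121 \
      --force --alsologtostderr --kubernetes-version=v1.20.0 \
      --container-runtime=crio --driver=kvm2

    # The v1.20.0 cri-o preload should then sit in the cache
    ls -lh ~/.minikube/cache/preloaded-tarball/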

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0224 12:00:29.286524  894564 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0224 12:00:29.286650  894564 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-675121
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-675121: exit status 85 (69.748373ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-675121 | jenkins | v1.35.0 | 24 Feb 25 12:00 UTC |          |
	|         | -p download-only-675121        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/24 12:00:02
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 12:00:02.621468  894575 out.go:345] Setting OutFile to fd 1 ...
	I0224 12:00:02.621601  894575 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:00:02.621611  894575 out.go:358] Setting ErrFile to fd 2...
	I0224 12:00:02.621618  894575 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:00:02.621856  894575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-887294/.minikube/bin
	W0224 12:00:02.621988  894575 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20451-887294/.minikube/config/config.json: open /home/jenkins/minikube-integration/20451-887294/.minikube/config/config.json: no such file or directory
	I0224 12:00:02.622574  894575 out.go:352] Setting JSON to true
	I0224 12:00:02.623618  894575 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6144,"bootTime":1740392259,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 12:00:02.623734  894575 start.go:139] virtualization: kvm guest
	I0224 12:00:02.626356  894575 out.go:97] [download-only-675121] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0224 12:00:02.626509  894575 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball: no such file or directory
	I0224 12:00:02.626561  894575 notify.go:220] Checking for updates...
	I0224 12:00:02.628524  894575 out.go:169] MINIKUBE_LOCATION=20451
	I0224 12:00:02.630122  894575 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 12:00:02.631783  894575 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20451-887294/kubeconfig
	I0224 12:00:02.633390  894575 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-887294/.minikube
	I0224 12:00:02.634987  894575 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0224 12:00:02.637705  894575 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0224 12:00:02.637944  894575 driver.go:394] Setting default libvirt URI to qemu:///system
	I0224 12:00:02.671954  894575 out.go:97] Using the kvm2 driver based on user configuration
	I0224 12:00:02.671997  894575 start.go:297] selected driver: kvm2
	I0224 12:00:02.672006  894575 start.go:901] validating driver "kvm2" against <nil>
	I0224 12:00:02.672378  894575 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 12:00:02.672471  894575 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20451-887294/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0224 12:00:02.689413  894575 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0224 12:00:02.689476  894575 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0224 12:00:02.690028  894575 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0224 12:00:02.690230  894575 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0224 12:00:02.690280  894575 cni.go:84] Creating CNI manager for ""
	I0224 12:00:02.690338  894575 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0224 12:00:02.690346  894575 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0224 12:00:02.690413  894575 start.go:340] cluster config:
	{Name:download-only-675121 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-675121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 12:00:02.690606  894575 iso.go:125] acquiring lock: {Name:mk57408cca66a96a13d93cda43cdfac6e61aef3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 12:00:02.692779  894575 out.go:97] Downloading VM boot image ...
	I0224 12:00:02.692834  894575 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20451-887294/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0224 12:00:13.191304  894575 out.go:97] Starting "download-only-675121" primary control-plane node in "download-only-675121" cluster
	I0224 12:00:13.191341  894575 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0224 12:00:13.309817  894575 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0224 12:00:13.309861  894575 cache.go:56] Caching tarball of preloaded images
	I0224 12:00:13.310068  894575 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0224 12:00:13.312192  894575 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0224 12:00:13.312226  894575 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0224 12:00:13.424274  894575 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0224 12:00:27.447292  894575 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0224 12:00:27.447397  894575 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-675121 host does not exist
	  To start a cluster, run: "minikube start -p download-only-675121"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-675121
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.32.2/json-events (15.04s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-290273 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-290273 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (15.042844774s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (15.04s)

                                                
                                    
TestDownloadOnly/v1.32.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0224 12:00:44.699725  894564 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
I0224 12:00:44.699789  894564 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-290273
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-290273: exit status 85 (67.736676ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-675121 | jenkins | v1.35.0 | 24 Feb 25 12:00 UTC |                     |
	|         | -p download-only-675121        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 24 Feb 25 12:00 UTC | 24 Feb 25 12:00 UTC |
	| delete  | -p download-only-675121        | download-only-675121 | jenkins | v1.35.0 | 24 Feb 25 12:00 UTC | 24 Feb 25 12:00 UTC |
	| start   | -o=json --download-only        | download-only-290273 | jenkins | v1.35.0 | 24 Feb 25 12:00 UTC |                     |
	|         | -p download-only-290273        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/24 12:00:29
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 12:00:29.701057  894842 out.go:345] Setting OutFile to fd 1 ...
	I0224 12:00:29.701374  894842 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:00:29.701387  894842 out.go:358] Setting ErrFile to fd 2...
	I0224 12:00:29.701393  894842 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:00:29.701640  894842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-887294/.minikube/bin
	I0224 12:00:29.702274  894842 out.go:352] Setting JSON to true
	I0224 12:00:29.703540  894842 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6171,"bootTime":1740392259,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 12:00:29.703669  894842 start.go:139] virtualization: kvm guest
	I0224 12:00:29.705911  894842 out.go:97] [download-only-290273] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0224 12:00:29.706106  894842 notify.go:220] Checking for updates...
	I0224 12:00:29.707362  894842 out.go:169] MINIKUBE_LOCATION=20451
	I0224 12:00:29.709059  894842 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 12:00:29.710474  894842 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20451-887294/kubeconfig
	I0224 12:00:29.711942  894842 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-887294/.minikube
	I0224 12:00:29.713453  894842 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0224 12:00:29.716311  894842 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0224 12:00:29.716614  894842 driver.go:394] Setting default libvirt URI to qemu:///system
	I0224 12:00:29.750459  894842 out.go:97] Using the kvm2 driver based on user configuration
	I0224 12:00:29.750510  894842 start.go:297] selected driver: kvm2
	I0224 12:00:29.750517  894842 start.go:901] validating driver "kvm2" against <nil>
	I0224 12:00:29.750880  894842 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 12:00:29.750971  894842 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20451-887294/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0224 12:00:29.767588  894842 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0224 12:00:29.767673  894842 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0224 12:00:29.768243  894842 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0224 12:00:29.768398  894842 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0224 12:00:29.768442  894842 cni.go:84] Creating CNI manager for ""
	I0224 12:00:29.768493  894842 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0224 12:00:29.768502  894842 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0224 12:00:29.768553  894842 start.go:340] cluster config:
	{Name:download-only-290273 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:download-only-290273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 12:00:29.768657  894842 iso.go:125] acquiring lock: {Name:mk57408cca66a96a13d93cda43cdfac6e61aef3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 12:00:29.770364  894842 out.go:97] Starting "download-only-290273" primary control-plane node in "download-only-290273" cluster
	I0224 12:00:29.770389  894842 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0224 12:00:30.397529  894842 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0224 12:00:30.397576  894842 cache.go:56] Caching tarball of preloaded images
	I0224 12:00:30.397781  894842 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0224 12:00:30.399848  894842 out.go:97] Downloading Kubernetes v1.32.2 preload ...
	I0224 12:00:30.399876  894842 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 ...
	I0224 12:00:30.512886  894842 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:a1ce605168a895ad5f3b3c8db1fe4d66 -> /home/jenkins/minikube-integration/20451-887294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-290273 host does not exist
	  To start a cluster, run: "minikube start -p download-only-290273"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.32.2/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-290273
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.64s)

                                                
                                                
=== RUN   TestBinaryMirror
I0224 12:00:45.347326  894564 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-786462 --alsologtostderr --binary-mirror http://127.0.0.1:35645 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-786462" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-786462
--- PASS: TestBinaryMirror (0.64s)
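Note: --binary-mirror redirects the kubectl/kubelet/kubeadm downloads away from dl.k8s.io. A sketch of serving such a mirror locally; the directory layout (mirroring the release paths under dl.k8s.io) and the /srv/k8s-mirror path are assumptions, only the flag and port come from the test above.

    # Hypothetical local mirror on the port used by the test
    python3 -m http.server 35645 --directory /srv/k8s-mirror &

    # Point minikube at it instead of dl.k8s.io
    out/minikube-linux-amd64 start --download-only -p binary-mirror-786462 \
      --alsologtostderr --binary-mirror http://127.0.0.1:35645 \
      --driver=kvm2 --container-runtime=crio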

                                                
                                    
TestOffline (88.18s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-226975 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-226975 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m26.708798597s)
helpers_test.go:175: Cleaning up "offline-crio-226975" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-226975
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-226975: (1.472922459s)
--- PASS: TestOffline (88.18s)
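Note: the offline start only succeeds because the ISO, preload and images are already in the local cache. A sketch of the two-step flow, assuming the cache is populated while still online; this shows the general idea rather than the exact setup the test performs.

    # 1) While online: fill the cache without creating a VM
    out/minikube-linux-amd64 start --download-only -p offline-crio-226975 \
      --driver=kvm2 --container-runtime=crio

    # 2) Offline: start from the cache with the same flags as the test
    out/minikube-linux-amd64 start -p offline-crio-226975 --alsologtostderr -v=1 \
      --memory=2048 --wait=true --driver=kvm2 --container-runtime=crio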

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-641952
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-641952: exit status 85 (58.437953ms)

                                                
                                                
-- stdout --
	* Profile "addons-641952" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-641952"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-641952
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-641952: exit status 85 (57.393897ms)

                                                
                                                
-- stdout --
	* Profile "addons-641952" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-641952"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (206.07s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-641952 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-641952 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m26.066622173s)
--- PASS: TestAddons/Setup (206.07s)
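Note: a reduced sketch of the addon setup logged above, enabling only a few of the addons and then confirming their state; the trimmed flag set is an assumption for brevity, and "minikube addons list" is standard minikube tooling rather than part of this test.

    # Start with a subset of the addons from the full invocation above
    out/minikube-linux-amd64 start -p addons-641952 --memory=4000 --wait=true \
      --driver=kvm2 --container-runtime=crio \
      --addons=registry --addons=metrics-server --addons=ingress

    # Confirm which addons are enabled for the profile
    out/minikube-linux-amd64 addons list -p addons-641952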

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-641952 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-641952 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.56s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-641952 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-641952 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1998d197-253b-4bf6-8a26-38cb3521fb90] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1998d197-253b-4bf6-8a26-38cb3521fb90] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.005551263s
addons_test.go:633: (dbg) Run:  kubectl --context addons-641952 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-641952 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-641952 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.56s)
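Note: the gcp-auth addon mutates newly created pods so that fake credential and project variables appear in their environment; the checks below are the same ones the test runs (testdata/busybox.yaml is a path inside the minikube test tree).

    # Create a pod after gcp-auth is enabled, then read the injected variables
    kubectl --context addons-641952 create -f testdata/busybox.yaml
    kubectl --context addons-641952 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
    kubectl --context addons-641952 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"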

                                                
                                    
TestAddons/parallel/Registry (16.32s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 4.820597ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-2zl8t" [149aa981-d7d4-42b6-945a-6ab73052301b] Running
I0224 12:04:30.623982  894564 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0224 12:04:30.624003  894564 kapi.go:107] duration metric: took 9.034607ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004504581s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-cqbs8" [3871d1d7-fffa-4ad5-b3a8-5e86e6392199] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00779826s
addons_test.go:331: (dbg) Run:  kubectl --context addons-641952 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-641952 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-641952 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.436473414s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-641952 ip
2025/02/24 12:04:46 [DEBUG] GET http://192.168.39.150:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-641952 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.32s)
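Note: two ways to poke the registry addon once its pods are Running. The in-cluster probe is the same one the test uses; the host-side curl against port 5000 assumes registry-proxy exposes the standard registry HTTP API (the /v2/_catalog path is not part of the test).

    # In-cluster probe via the service DNS name
    kubectl --context addons-641952 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

    # Host-side probe against the node IP and the proxied port seen above
    curl -s "http://$(out/minikube-linux-amd64 -p addons-641952 ip):5000/v2/_catalog"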

                                                
                                    
TestAddons/parallel/InspektorGadget (12.23s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-tmpzm" [8eaeb53a-38d1-4574-8525-c0ea01a64403] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00445176s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-641952 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-641952 addons disable inspektor-gadget --alsologtostderr -v=1: (6.225069502s)
--- PASS: TestAddons/parallel/InspektorGadget (12.23s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.84s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 4.553501ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-wzbn9" [19f41d09-6274-428d-a8b3-7910f74ef377] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00606725s
addons_test.go:402: (dbg) Run:  kubectl --context addons-641952 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-641952 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.84s)
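Note: once metrics-server reports healthy, the usual way to see its data is "kubectl top"; "top pods" is what the test runs, "top nodes" is the standard companion command.

    kubectl --context addons-641952 top pods -n kube-system
    kubectl --context addons-641952 top nodes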

                                                
                                    
TestAddons/parallel/CSI (170.64s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:488: csi-hostpath-driver pods stabilized in 9.044593ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-641952 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-641952 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-641952 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-641952 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-641952 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-641952 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-641952 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-641952 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-641952 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-641952 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-641952 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-641952 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-641952 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [6a6de03e-e75f-451a-a835-26b8efc13f09] Pending
helpers_test.go:344: "task-pv-pod" [6a6de03e-e75f-451a-a835-26b8efc13f09] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [6a6de03e-e75f-451a-a835-26b8efc13f09] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 18.004400002s
addons_test.go:511: (dbg) Run:  kubectl --context addons-641952 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-641952 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-641952 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-641952 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-641952 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-641952 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-641952 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-641952 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-641952 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-641952 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-641952 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [cda1d6b9-eb2a-4868-9e8d-c3e12704c446] Pending
helpers_test.go:344: "task-pv-pod-restore" [cda1d6b9-eb2a-4868-9e8d-c3e12704c446] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [cda1d6b9-eb2a-4868-9e8d-c3e12704c446] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 2m8.004345215s
addons_test.go:553: (dbg) Run:  kubectl --context addons-641952 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-641952 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-641952 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-641952 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-641952 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-641952 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.057037237s)
--- PASS: TestAddons/parallel/CSI (170.64s)
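
For anyone reproducing the snapshot/restore flow exercised above outside the test harness, the shell sketch below walks the same steps (claim, snapshot, restored claim). The class names csi-hostpath-sc and csi-hostpath-snapclass are assumed addon defaults, not values read from the testdata manifests.

# Hedged sketch of the CSI hostpath snapshot/restore flow; class names are assumptions.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: csi-hostpath-sc          # assumed addon default
  resources:
    requests:
      storage: 1Gi
EOF
# As in the test, a pod consuming hpvc should be running before snapshotting if the
# class uses WaitForFirstConsumer binding.
kubectl apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed addon default
  source:
    persistentVolumeClaimName: hpvc
EOF
# Wait for the snapshot to become ready, then restore it into a fresh claim.
kubectl wait volumesnapshot/new-snapshot-demo --for=jsonpath='{.status.readyToUse}'=true --timeout=5m
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: csi-hostpath-sc
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: new-snapshot-demo
  resources:
    requests:
      storage: 1Gi
EOF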

                                                
                                    
TestAddons/parallel/Headlamp (141.5s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-641952 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-641952 --alsologtostderr -v=1: (1.312051902s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-nt2g6" [d5c67ff0-12f6-4f90-b72d-b5b97d137f55] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-nt2g6" [d5c67ff0-12f6-4f90-b72d-b5b97d137f55] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-nt2g6" [d5c67ff0-12f6-4f90-b72d-b5b97d137f55] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 2m14.00407977s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-641952 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-641952 addons disable headlamp --alsologtostderr -v=1: (6.185903465s)
--- PASS: TestAddons/parallel/Headlamp (141.50s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.83s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-754dc876cd-5ndmw" [4a519373-b5a5-4494-99ba-3ce5af099878] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.008397298s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-641952 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.83s)

                                                
                                    
TestAddons/parallel/LocalPath (146.2s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-641952 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-641952 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-641952 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-641952 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-641952 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-641952 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-641952 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-641952 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-641952 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-641952 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-641952 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-641952 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-641952 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-641952 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-641952 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-641952 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-641952 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-641952 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [4ea1014c-001b-4bb4-9910-2e215fd59077] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [4ea1014c-001b-4bb4-9910-2e215fd59077] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [4ea1014c-001b-4bb4-9910-2e215fd59077] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 2m10.00327639s
addons_test.go:906: (dbg) Run:  kubectl --context addons-641952 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-641952 ssh "cat /opt/local-path-provisioner/pvc-cd8fcaca-bd54-49a6-9e22-383da91e5d0a_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-641952 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-641952 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-641952 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (146.20s)
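
The local-path flow above can be reproduced by hand with one manifest: a claim bound by the rancher.io/local-path provisioner plus a pod that writes file1 into it. The storageClassName local-path is the provisioner's usual default and the busybox command is illustrative; neither is copied from the testdata files.

# Hedged sketch of the storage-provisioner-rancher flow; class name and command are assumptions.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path        # assumed provisioner default
  resources:
    requests:
      storage: 64Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-local-path
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo local-path-test > /data/file1"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc
EOF
# The claim stays Pending (WaitForFirstConsumer) until the pod is scheduled, which is
# why the test polls its phase for a while before the pod reaches Succeeded.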

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.54s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4wfmt" [a145392e-b0c5-483f-a61a-74bd39d39553] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003903208s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-641952 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.54s)

                                                
                                    
TestAddons/parallel/Yakd (11.83s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
I0224 12:04:30.614983  894564 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-g5vfc" [f9b53973-1cc3-4803-aa0c-80214d2c7afd] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.006366624s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-641952 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-641952 addons disable yakd --alsologtostderr -v=1: (5.818413716s)
--- PASS: TestAddons/parallel/Yakd (11.83s)

                                                
                                    
TestAddons/StoppedEnableDisable (91.31s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-641952
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-641952: (1m30.997369269s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-641952
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-641952
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-641952
--- PASS: TestAddons/StoppedEnableDisable (91.31s)

                                                
                                    
TestCertOptions (89s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-746548 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0224 13:11:29.922631  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:11:46.850040  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-746548 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m27.687641102s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-746548 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-746548 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-746548 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-746548" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-746548
--- PASS: TestCertOptions (89.00s)
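
To see what the assertions above are checking, the apiserver certificate SANs and the requested port can be inspected directly. This sketch just re-runs the ssh/openssl step shown in the log and reads the server URL back from kubeconfig; the grep pattern and jsonpath filter are illustrative.

# Inspect the SANs baked into the apiserver certificate (mirrors the test's ssh step).
minikube -p cert-options-746548 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'
# The requested --apiserver-port should appear in the kubeconfig server URL.
kubectl config view -o jsonpath='{.clusters[?(@.name=="cert-options-746548")].cluster.server}'
# Expected: 192.168.15.15 and www.google.com among the SANs, and :8555 in the URL.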

                                                
                                    
TestCertExpiration (279.47s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-993480 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-993480 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m5.646755894s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-993480 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-993480 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (32.71859123s)
helpers_test.go:175: Cleaning up "cert-expiration-993480" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-993480
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-993480: (1.107089504s)
--- PASS: TestCertExpiration (279.47s)
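
A quick manual check of the behaviour exercised here is to read the certificate expiry off the node after the second start with --cert-expiration=8760h; the openssl invocation below mirrors the style of the other cert tests and is illustrative.

# Confirm the apiserver certificate was re-issued with the longer lifetime.
minikube -p cert-expiration-993480 ssh \
  "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"
# Expected: a notAfter= date roughly one year out, rather than the original 3m window.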

                                                
                                    
TestForceSystemdFlag (54.45s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-705501 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-705501 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (53.41158736s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-705501 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-705501" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-705501
--- PASS: TestForceSystemdFlag (54.45s)
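
The cat of 02-crio.conf above is checking that --force-systemd switches CRI-O to the systemd cgroup manager. A narrower spot check, assuming the standard CRI-O drop-in layout, looks like this:

# With --force-systemd, the CRI-O drop-in should pin the systemd cgroup manager.
minikube -p force-systemd-flag-705501 ssh \
  "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
# Expected (assumed) line: cgroup_manager = "systemd"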

                                                
                                    
TestForceSystemdEnv (48.8s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-302174 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-302174 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (47.788063408s)
helpers_test.go:175: Cleaning up "force-systemd-env-302174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-302174
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-302174: (1.014084606s)
--- PASS: TestForceSystemdEnv (48.80s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.76s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0224 13:08:03.325628  894564 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0224 13:08:03.325839  894564 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0224 13:08:03.367377  894564 install.go:62] docker-machine-driver-kvm2: exit status 1
W0224 13:08:03.367755  894564 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0224 13:08:03.367849  894564 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3952201273/001/docker-machine-driver-kvm2
I0224 13:08:03.683716  894564 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3952201273/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x54882a0 0x54882a0 0x54882a0 0x54882a0 0x54882a0 0x54882a0 0x54882a0] Decompressors:map[bz2:0xc00057f0d8 gz:0xc00057f190 tar:0xc00057f120 tar.bz2:0xc00057f130 tar.gz:0xc00057f140 tar.xz:0xc00057f150 tar.zst:0xc00057f170 tbz2:0xc00057f130 tgz:0xc00057f140 txz:0xc00057f150 tzst:0xc00057f170 xz:0xc00057f198 zip:0xc00057f1a0 zst:0xc00057f1b0] Getters:map[file:0xc001c038d0 http:0xc0005a73b0 https:0xc0005a7400] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0224 13:08:03.683789  894564 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3952201273/001/docker-machine-driver-kvm2
I0224 13:08:06.386754  894564 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0224 13:08:06.386852  894564 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0224 13:08:06.427565  894564 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0224 13:08:06.427600  894564 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0224 13:08:06.427670  894564 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0224 13:08:06.427707  894564 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3952201273/002/docker-machine-driver-kvm2
I0224 13:08:06.486456  894564 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3952201273/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x54882a0 0x54882a0 0x54882a0 0x54882a0 0x54882a0 0x54882a0 0x54882a0] Decompressors:map[bz2:0xc00057f0d8 gz:0xc00057f190 tar:0xc00057f120 tar.bz2:0xc00057f130 tar.gz:0xc00057f140 tar.xz:0xc00057f150 tar.zst:0xc00057f170 tbz2:0xc00057f130 tgz:0xc00057f140 txz:0xc00057f150 tzst:0xc00057f170 xz:0xc00057f198 zip:0xc00057f1a0 zst:0xc00057f1b0] Getters:map[file:0xc000259540 http:0xc000785450 https:0xc0007854a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0224 13:08:06.486529  894564 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3952201273/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.76s)
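
The 404s logged above are expected: the installer first tries the arch-suffixed release asset and, when its checksum file is missing, falls back to the un-suffixed name. A rough shell equivalent of that fallback, with the URLs taken from the log and everything else illustrative:

# Fallback sketch: try the arch-specific asset first, then the common name.
VER=v1.3.0
BASE="https://github.com/kubernetes/minikube/releases/download/${VER}"
curl -fLo docker-machine-driver-kvm2 "${BASE}/docker-machine-driver-kvm2-amd64" \
  || curl -fLo docker-machine-driver-kvm2 "${BASE}/docker-machine-driver-kvm2"
chmod +x docker-machine-driver-kvm2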

                                                
                                    
TestErrorSpam/setup (43.2s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-602737 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-602737 --driver=kvm2  --container-runtime=crio
E0224 12:09:12.777643  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:09:12.784110  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:09:12.795485  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:09:12.816987  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:09:12.858529  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:09:12.940100  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:09:13.101776  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:09:13.423584  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:09:14.065760  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:09:15.347426  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:09:17.910380  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:09:23.032199  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:09:33.274387  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-602737 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-602737 --driver=kvm2  --container-runtime=crio: (43.199240685s)
--- PASS: TestErrorSpam/setup (43.20s)

                                                
                                    
TestErrorSpam/start (0.39s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-602737 --log_dir /tmp/nospam-602737 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-602737 --log_dir /tmp/nospam-602737 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-602737 --log_dir /tmp/nospam-602737 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

                                                
                                    
TestErrorSpam/status (0.77s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-602737 --log_dir /tmp/nospam-602737 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-602737 --log_dir /tmp/nospam-602737 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-602737 --log_dir /tmp/nospam-602737 status
--- PASS: TestErrorSpam/status (0.77s)

                                                
                                    
TestErrorSpam/pause (1.67s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-602737 --log_dir /tmp/nospam-602737 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-602737 --log_dir /tmp/nospam-602737 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-602737 --log_dir /tmp/nospam-602737 pause
--- PASS: TestErrorSpam/pause (1.67s)

                                                
                                    
TestErrorSpam/unpause (1.74s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-602737 --log_dir /tmp/nospam-602737 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-602737 --log_dir /tmp/nospam-602737 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-602737 --log_dir /tmp/nospam-602737 unpause
--- PASS: TestErrorSpam/unpause (1.74s)

                                                
                                    
TestErrorSpam/stop (5.31s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-602737 --log_dir /tmp/nospam-602737 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-602737 --log_dir /tmp/nospam-602737 stop: (2.315306975s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-602737 --log_dir /tmp/nospam-602737 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-602737 --log_dir /tmp/nospam-602737 stop
E0224 12:09:53.756111  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-602737 --log_dir /tmp/nospam-602737 stop: (2.025270753s)
--- PASS: TestErrorSpam/stop (5.31s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20451-887294/.minikube/files/etc/test/nested/copy/894564/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (84.53s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-892991 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0224 12:10:34.718549  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-892991 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m24.532779738s)
--- PASS: TestFunctional/serial/StartWithProxy (84.53s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (34.78s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0224 12:11:19.220688  894564 config.go:182] Loaded profile config "functional-892991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-892991 --alsologtostderr -v=8
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-892991 --alsologtostderr -v=8: (34.782293374s)
functional_test.go:680: soft start took 34.783126738s for "functional-892991" cluster.
I0224 12:11:54.003371  894564 config.go:182] Loaded profile config "functional-892991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (34.78s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-892991 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.14s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-892991 cache add registry.k8s.io/pause:3.1: (1.04015829s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-892991 cache add registry.k8s.io/pause:3.3: (1.143696129s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 cache add registry.k8s.io/pause:latest
E0224 12:11:56.640585  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-892991 cache add registry.k8s.io/pause:latest: (1.117830676s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-892991 /tmp/TestFunctionalserialCacheCmdcacheadd_local2592028400/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 cache add minikube-local-cache-test:functional-892991
functional_test.go:1106: (dbg) Done: out/minikube-linux-amd64 -p functional-892991 cache add minikube-local-cache-test:functional-892991: (1.96751321s)
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 cache delete minikube-local-cache-test:functional-892991
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-892991
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-892991 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (230.78324ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 kubectl -- --context functional-892991 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-892991 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (276.35s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-892991 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0224 12:14:12.778588  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:14:40.489007  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-892991 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (4m36.351787989s)
functional_test.go:778: restart took 4m36.351965917s for "functional-892991" cluster.
I0224 12:16:38.544082  894564 config.go:182] Loaded profile config "functional-892991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (276.35s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-892991 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.64s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-892991 logs: (1.634632828s)
--- PASS: TestFunctional/serial/LogsCmd (1.64s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.62s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 logs --file /tmp/TestFunctionalserialLogsFileCmd3110855653/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-892991 logs --file /tmp/TestFunctionalserialLogsFileCmd3110855653/001/logs.txt: (1.613946236s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.62s)

                                                
                                    
TestFunctional/serial/InvalidService (4.27s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-892991 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-892991
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-892991: exit status 115 (287.631575ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.143:31868 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-892991 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.27s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-892991 config get cpus: exit status 14 (68.348355ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-892991 config get cpus: exit status 14 (60.447598ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (15.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-892991 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-892991 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 904040: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.41s)

                                                
                                    
TestFunctional/parallel/DryRun (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-892991 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-892991 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (374.292918ms)

                                                
                                                
-- stdout --
	* [functional-892991] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20451
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20451-887294/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-887294/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0224 12:17:06.600093  903520 out.go:345] Setting OutFile to fd 1 ...
	I0224 12:17:06.600397  903520 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:17:06.600410  903520 out.go:358] Setting ErrFile to fd 2...
	I0224 12:17:06.600417  903520 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:17:06.600753  903520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-887294/.minikube/bin
	I0224 12:17:06.601555  903520 out.go:352] Setting JSON to false
	I0224 12:17:06.602953  903520 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":7168,"bootTime":1740392259,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 12:17:06.603106  903520 start.go:139] virtualization: kvm guest
	I0224 12:17:06.605606  903520 out.go:177] * [functional-892991] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0224 12:17:06.607274  903520 out.go:177]   - MINIKUBE_LOCATION=20451
	I0224 12:17:06.607318  903520 notify.go:220] Checking for updates...
	I0224 12:17:06.610207  903520 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 12:17:06.611745  903520 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20451-887294/kubeconfig
	I0224 12:17:06.613338  903520 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-887294/.minikube
	I0224 12:17:06.614648  903520 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0224 12:17:06.615785  903520 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 12:17:06.617365  903520 config.go:182] Loaded profile config "functional-892991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 12:17:06.617792  903520 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:17:06.617850  903520 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:17:06.634692  903520 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38473
	I0224 12:17:06.635322  903520 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:17:06.635997  903520 main.go:141] libmachine: Using API Version  1
	I0224 12:17:06.636020  903520 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:17:06.636466  903520 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:17:06.636685  903520 main.go:141] libmachine: (functional-892991) Calling .DriverName
	I0224 12:17:06.637005  903520 driver.go:394] Setting default libvirt URI to qemu:///system
	I0224 12:17:06.637501  903520 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:17:06.637551  903520 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:17:06.654247  903520 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34091
	I0224 12:17:06.654733  903520 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:17:06.655311  903520 main.go:141] libmachine: Using API Version  1
	I0224 12:17:06.655334  903520 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:17:06.655738  903520 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:17:06.655985  903520 main.go:141] libmachine: (functional-892991) Calling .DriverName
	I0224 12:17:06.805928  903520 out.go:177] * Using the kvm2 driver based on existing profile
	I0224 12:17:06.902719  903520 start.go:297] selected driver: kvm2
	I0224 12:17:06.902756  903520 start.go:901] validating driver "kvm2" against &{Name:functional-892991 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:functional-892991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.143 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/je
nkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 12:17:06.902881  903520 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 12:17:06.905385  903520 out.go:201] 
	W0224 12:17:06.907240  903520 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0224 12:17:06.908525  903520 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-892991 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.53s)
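
The non-zero exit recorded above is the expected outcome of this test: with --dry-run and --memory 250MB, minikube refuses to proceed because 250MiB is below the 1800MB usable minimum and exits with RSRC_INSUFFICIENT_REQ_MEMORY (status 23). A minimal Go sketch of that kind of pre-flight memory check, illustrative only and not minikube's actual start-up code:

	package main

	import (
		"fmt"
		"os"
	)

	// minMemoryMB mirrors the 1800MB floor reported in the log above;
	// requestedMB stands in for the value of the --memory flag.
	const minMemoryMB = 1800

	func validateMemory(requestedMB int) error {
		if requestedMB < minMemoryMB {
			return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMiB is less than the usable minimum of %dMB",
				requestedMB, minMemoryMB)
		}
		return nil
	}

	func main() {
		if err := validateMemory(250); err != nil {
			fmt.Fprintln(os.Stderr, "X Exiting due to", err)
			os.Exit(23) // matches the exit status recorded above
		}
	}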

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-892991 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-892991 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (151.859999ms)

                                                
                                                
-- stdout --
	* [functional-892991] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20451
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20451-887294/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-887294/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0224 12:17:07.122209  903576 out.go:345] Setting OutFile to fd 1 ...
	I0224 12:17:07.122353  903576 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:17:07.122363  903576 out.go:358] Setting ErrFile to fd 2...
	I0224 12:17:07.122369  903576 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:17:07.123382  903576 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-887294/.minikube/bin
	I0224 12:17:07.124334  903576 out.go:352] Setting JSON to false
	I0224 12:17:07.125429  903576 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":7168,"bootTime":1740392259,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 12:17:07.125555  903576 start.go:139] virtualization: kvm guest
	I0224 12:17:07.127823  903576 out.go:177] * [functional-892991] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0224 12:17:07.129715  903576 out.go:177]   - MINIKUBE_LOCATION=20451
	I0224 12:17:07.129751  903576 notify.go:220] Checking for updates...
	I0224 12:17:07.132344  903576 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 12:17:07.133824  903576 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20451-887294/kubeconfig
	I0224 12:17:07.135170  903576 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-887294/.minikube
	I0224 12:17:07.136321  903576 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0224 12:17:07.137525  903576 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 12:17:07.139308  903576 config.go:182] Loaded profile config "functional-892991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 12:17:07.139712  903576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:17:07.139776  903576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:17:07.157269  903576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37669
	I0224 12:17:07.157839  903576 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:17:07.158489  903576 main.go:141] libmachine: Using API Version  1
	I0224 12:17:07.158534  903576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:17:07.158979  903576 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:17:07.159233  903576 main.go:141] libmachine: (functional-892991) Calling .DriverName
	I0224 12:17:07.159546  903576 driver.go:394] Setting default libvirt URI to qemu:///system
	I0224 12:17:07.159907  903576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:17:07.159965  903576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:17:07.175860  903576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44635
	I0224 12:17:07.176402  903576 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:17:07.176988  903576 main.go:141] libmachine: Using API Version  1
	I0224 12:17:07.177019  903576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:17:07.177403  903576 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:17:07.177616  903576 main.go:141] libmachine: (functional-892991) Calling .DriverName
	I0224 12:17:07.214123  903576 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0224 12:17:07.215444  903576 start.go:297] selected driver: kvm2
	I0224 12:17:07.215458  903576 start.go:901] validating driver "kvm2" against &{Name:functional-892991 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:functional-892991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.143 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/je
nkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 12:17:07.215589  903576 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 12:17:07.217716  903576 out.go:201] 
	W0224 12:17:07.219023  903576 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0224 12:17:07.220385  903576 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
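
The French output above is the point of this test: run under a French locale, minikube prints its messages translated, including the driver line "Utilisation du pilote kvm2 basé sur le profil existant" ("Using the kvm2 driver based on existing profile"). A toy sketch of locale-driven message selection via the usual POSIX environment variables, not minikube's real translation machinery:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// messages is a toy catalog; minikube ships real translation files.
	var messages = map[string]string{
		"en": "Using the kvm2 driver based on existing profile",
		"fr": "Utilisation du pilote kvm2 basé sur le profil existant",
	}

	// locale checks the usual POSIX variables in precedence order.
	func locale() string {
		for _, v := range []string{"LC_ALL", "LC_MESSAGES", "LANG"} {
			if val := os.Getenv(v); val != "" {
				return strings.SplitN(val, "_", 2)[0] // "fr_FR.UTF-8" -> "fr"
			}
		}
		return "en"
	}

	func main() {
		msg, ok := messages[locale()]
		if !ok {
			msg = messages["en"]
		}
		fmt.Println("*", msg)
	}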

                                                
                                    
TestFunctional/parallel/StatusCmd (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.84s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (12.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-892991 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-892991 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-g8nkx" [5ba36bc8-e978-40f8-abed-5dd68b8a7af0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-g8nkx" [5ba36bc8-e978-40f8-abed-5dd68b8a7af0] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.401290591s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.39.143:30430
functional_test.go:1692: http://192.168.39.143:30430: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-g8nkx

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.143:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.143:30430
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.26s)
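
The test asks "minikube service hello-node-connect --url" for the NodePort endpoint (http://192.168.39.143:30430 in this run) and then fetches it, expecting the echoserver reply captured above. A minimal Go sketch of that probe, assuming the same URL:

	package main

	import (
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint reported by `minikube service hello-node-connect --url` above.
		url := "http://192.168.39.143:30430"

		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Get(url)
		if err != nil {
			log.Fatalf("request failed: %v", err)
		}
		defer resp.Body.Close()

		body, err := io.ReadAll(resp.Body)
		if err != nil {
			log.Fatalf("reading body: %v", err)
		}
		// The echoserver answers with its hostname and request details,
		// as shown in the body captured above.
		fmt.Printf("status=%d\n%s\n", resp.StatusCode, body)
	}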

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (48.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [b9d018f5-7871-4ba8-a9e1-725369ce16d2] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004128477s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-892991 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-892991 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-892991 get pvc myclaim -o=json
I0224 12:16:53.909192  894564 retry.go:31] will retry after 2.752115849s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:14e1952f-492e-4785-b4e5-5843c40aa4c7 ResourceVersion:514 Generation:0 CreationTimestamp:2025-02-24 12:16:53 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-14e1952f-492e-4785-b4e5-5843c40aa4c7 StorageClassName:0xc001d22cd0 VolumeMode:0xc001d22ce0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-892991 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-892991 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b761dd42-8f03-4af6-8962-7fe074b0f1c6] Pending
helpers_test.go:344: "sp-pod" [b761dd42-8f03-4af6-8962-7fe074b0f1c6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b761dd42-8f03-4af6-8962-7fe074b0f1c6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.004149431s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-892991 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-892991 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-892991 delete -f testdata/storage-provisioner/pod.yaml: (1.694164451s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-892991 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8c64fed9-7a4a-48b9-b00f-90caed9a06d8] Pending
helpers_test.go:344: "sp-pod" [8c64fed9-7a4a-48b9-b00f-90caed9a06d8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8c64fed9-7a4a-48b9-b00f-90caed9a06d8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.006922534s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-892991 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (48.74s)
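
The retry logged at 12:16:53 is the test polling the claim until its phase moves from Pending to Bound before scheduling the sp-pod that mounts it. A sketch of that wait using client-go, assuming an already-constructed clientset rather than the test's own helpers:

	package pvcwait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPVCBound polls a PersistentVolumeClaim until its phase is Bound
	// or the timeout expires.
	func waitForPVCBound(ctx context.Context, c *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			pvc, err := c.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if pvc.Status.Phase == corev1.ClaimBound {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("pvc %s/%s still %q after %v", ns, name, pvc.Status.Phase, timeout)
			}
			time.Sleep(2 * time.Second) // the log above shows a similar sleep-and-retry cadence
		}
	}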

                                                
                                    
TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh -n functional-892991 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 cp functional-892991:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2001532170/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh -n functional-892991 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh -n functional-892991 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.42s)

                                                
                                    
TestFunctional/parallel/MySQL (27.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-892991 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-hpt8x" [ac0f8b25-84ef-4387-8e67-608f56ea8d60] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-hpt8x" [ac0f8b25-84ef-4387-8e67-608f56ea8d60] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.328536409s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-892991 exec mysql-58ccfd96bb-hpt8x -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-892991 exec mysql-58ccfd96bb-hpt8x -- mysql -ppassword -e "show databases;": exit status 1 (357.931711ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0224 12:17:13.012765  894564 retry.go:31] will retry after 963.342901ms: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-892991 exec mysql-58ccfd96bb-hpt8x -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-892991 exec mysql-58ccfd96bb-hpt8x -- mysql -ppassword -e "show databases;": exit status 1 (218.007966ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0224 12:17:14.195328  894564 retry.go:31] will retry after 2.150980032s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-892991 exec mysql-58ccfd96bb-hpt8x -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.38s)
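
The two ERROR 2002 failures are expected while mysqld is still initializing inside the pod; the test simply retries the same "show databases;" query with growing delays (963ms, then about 2.15s) until it succeeds. A rough sketch of that retry loop around the same kubectl exec, not the test's retry helper:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Pod name and context are the ones from the log above.
		args := []string{
			"--context", "functional-892991",
			"exec", "mysql-58ccfd96bb-hpt8x", "--",
			"mysql", "-ppassword", "-e", "show databases;",
		}

		delay := time.Second
		for attempt := 1; attempt <= 5; attempt++ {
			out, err := exec.Command("kubectl", args...).CombinedOutput()
			if err == nil {
				fmt.Printf("succeeded on attempt %d:\n%s", attempt, out)
				return
			}
			// ERROR 2002 just means mysqld isn't accepting connections yet.
			fmt.Printf("attempt %d failed: %v; retrying in %v\n", attempt, err, delay)
			time.Sleep(delay)
			delay *= 2 // simple backoff; the test's retry helper grows its delay similarly
		}
		fmt.Println("mysql never became ready")
	}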

                                                
                                    
TestFunctional/parallel/FileSync (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/894564/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh "sudo cat /etc/test/nested/copy/894564/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

                                                
                                    
TestFunctional/parallel/CertSync (1.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/894564.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh "sudo cat /etc/ssl/certs/894564.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/894564.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh "sudo cat /usr/share/ca-certificates/894564.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/8945642.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh "sudo cat /etc/ssl/certs/8945642.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/8945642.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh "sudo cat /usr/share/ca-certificates/8945642.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.51s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-892991 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
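
The go-template passed to kubectl iterates the labels map of the first node and prints each key. The identical template works with Go's text/template against a decoded "kubectl get nodes -o json" document; a small sketch with a stubbed node payload (the two labels below are placeholders, not this cluster's full label set):

	package main

	import (
		"encoding/json"
		"log"
		"os"
		"text/template"
	)

	// A stubbed fragment of `kubectl get nodes -o json`; real output has many more fields.
	const nodesJSON = `{"items":[{"metadata":{"labels":{
		"kubernetes.io/hostname":"functional-892991",
		"kubernetes.io/os":"linux"}}}]}`

	func main() {
		var nodes map[string]interface{}
		if err := json.Unmarshal([]byte(nodesJSON), &nodes); err != nil {
			log.Fatal(err)
		}

		// Same template the test passes via --output=go-template.
		tmpl := template.Must(template.New("labels").Parse(
			`{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`))
		if err := tmpl.Execute(os.Stdout, nodes); err != nil {
			log.Fatal(err)
		}
	}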

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-892991 ssh "sudo systemctl is-active docker": exit status 1 (250.400754ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-892991 ssh "sudo systemctl is-active containerd": exit status 1 (220.487098ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
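
Both probes are meant to fail here: with crio as the active runtime, "systemctl is-active docker" and "systemctl is-active containerd" print "inactive" and exit with status 3, which the ssh wrapper surfaces as the non-zero exits above. A small Go sketch of checking a unit's state via that exit code:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	// isActive runs `systemctl is-active <unit>` and reports whether the unit is active.
	// systemctl exits 0 for an active unit and non-zero (typically 3) otherwise.
	func isActive(unit string) (bool, error) {
		out, err := exec.Command("systemctl", "is-active", unit).CombinedOutput()
		if err == nil {
			return true, nil
		}
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Printf("%s: %s (exit %d)\n", unit, strings.TrimSpace(string(out)), exitErr.ExitCode())
			return false, nil // inactive or failed, not an execution error
		}
		return false, err
	}

	func main() {
		for _, unit := range []string{"docker", "containerd", "crio"} {
			active, err := isActive(unit)
			if err != nil {
				fmt.Println(unit, "error:", err)
				continue
			}
			fmt.Println(unit, "active:", active)
		}
	}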

                                                
                                    
TestFunctional/parallel/License (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.74s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-892991 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-892991
localhost/kicbase/echo-server:functional-892991
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20241212-9f82dd49
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-892991 image ls --format short --alsologtostderr:
I0224 12:17:17.501720  904184 out.go:345] Setting OutFile to fd 1 ...
I0224 12:17:17.502324  904184 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 12:17:17.502345  904184 out.go:358] Setting ErrFile to fd 2...
I0224 12:17:17.502352  904184 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 12:17:17.502735  904184 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-887294/.minikube/bin
I0224 12:17:17.503724  904184 config.go:182] Loaded profile config "functional-892991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0224 12:17:17.503902  904184 config.go:182] Loaded profile config "functional-892991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0224 12:17:17.504560  904184 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0224 12:17:17.504632  904184 main.go:141] libmachine: Launching plugin server for driver kvm2
I0224 12:17:17.520500  904184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40439
I0224 12:17:17.521035  904184 main.go:141] libmachine: () Calling .GetVersion
I0224 12:17:17.521711  904184 main.go:141] libmachine: Using API Version  1
I0224 12:17:17.521746  904184 main.go:141] libmachine: () Calling .SetConfigRaw
I0224 12:17:17.522096  904184 main.go:141] libmachine: () Calling .GetMachineName
I0224 12:17:17.522308  904184 main.go:141] libmachine: (functional-892991) Calling .GetState
I0224 12:17:17.524579  904184 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0224 12:17:17.524646  904184 main.go:141] libmachine: Launching plugin server for driver kvm2
I0224 12:17:17.540355  904184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39079
I0224 12:17:17.540915  904184 main.go:141] libmachine: () Calling .GetVersion
I0224 12:17:17.541482  904184 main.go:141] libmachine: Using API Version  1
I0224 12:17:17.541506  904184 main.go:141] libmachine: () Calling .SetConfigRaw
I0224 12:17:17.541912  904184 main.go:141] libmachine: () Calling .GetMachineName
I0224 12:17:17.542142  904184 main.go:141] libmachine: (functional-892991) Calling .DriverName
I0224 12:17:17.542376  904184 ssh_runner.go:195] Run: systemctl --version
I0224 12:17:17.542401  904184 main.go:141] libmachine: (functional-892991) Calling .GetSSHHostname
I0224 12:17:17.545219  904184 main.go:141] libmachine: (functional-892991) DBG | domain functional-892991 has defined MAC address 52:54:00:b4:04:c3 in network mk-functional-892991
I0224 12:17:17.545663  904184 main.go:141] libmachine: (functional-892991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:04:c3", ip: ""} in network mk-functional-892991: {Iface:virbr1 ExpiryTime:2025-02-24 13:10:10 +0000 UTC Type:0 Mac:52:54:00:b4:04:c3 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:functional-892991 Clientid:01:52:54:00:b4:04:c3}
I0224 12:17:17.545699  904184 main.go:141] libmachine: (functional-892991) DBG | domain functional-892991 has defined IP address 192.168.39.143 and MAC address 52:54:00:b4:04:c3 in network mk-functional-892991
I0224 12:17:17.545798  904184 main.go:141] libmachine: (functional-892991) Calling .GetSSHPort
I0224 12:17:17.545996  904184 main.go:141] libmachine: (functional-892991) Calling .GetSSHKeyPath
I0224 12:17:17.546159  904184 main.go:141] libmachine: (functional-892991) Calling .GetSSHUsername
I0224 12:17:17.546267  904184 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/functional-892991/id_rsa Username:docker}
I0224 12:17:17.632830  904184 ssh_runner.go:195] Run: sudo crictl images --output json
I0224 12:17:17.675689  904184 main.go:141] libmachine: Making call to close driver server
I0224 12:17:17.675705  904184 main.go:141] libmachine: (functional-892991) Calling .Close
I0224 12:17:17.676067  904184 main.go:141] libmachine: (functional-892991) DBG | Closing plugin on server side
I0224 12:17:17.676098  904184 main.go:141] libmachine: Successfully made call to close driver server
I0224 12:17:17.676108  904184 main.go:141] libmachine: Making call to close connection to plugin binary
I0224 12:17:17.676119  904184 main.go:141] libmachine: Making call to close driver server
I0224 12:17:17.676127  904184 main.go:141] libmachine: (functional-892991) Calling .Close
I0224 12:17:17.676406  904184 main.go:141] libmachine: Successfully made call to close driver server
I0224 12:17:17.676419  904184 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-892991 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager | v1.32.2            | b6a454c5a800d | 90.8MB |
| registry.k8s.io/kube-proxy              | v1.32.2            | f1332858868e1 | 95.3MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/nginx                 | latest             | 97662d24417b3 | 196MB  |
| localhost/kicbase/echo-server           | functional-892991  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.32.2            | 85b7a174738ba | 98.1MB |
| registry.k8s.io/kube-scheduler          | v1.32.2            | d8e673e7c9983 | 70.7MB |
| docker.io/kindest/kindnetd              | v20241212-9f82dd49 | d300845f67aeb | 95.7MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-892991  | 0a05e4c820e78 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-892991 image ls --format table --alsologtostderr:
I0224 12:17:24.722723  904797 out.go:345] Setting OutFile to fd 1 ...
I0224 12:17:24.723311  904797 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 12:17:24.723326  904797 out.go:358] Setting ErrFile to fd 2...
I0224 12:17:24.723333  904797 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 12:17:24.723811  904797 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-887294/.minikube/bin
I0224 12:17:24.725432  904797 config.go:182] Loaded profile config "functional-892991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0224 12:17:24.725661  904797 config.go:182] Loaded profile config "functional-892991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0224 12:17:24.726288  904797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0224 12:17:24.726370  904797 main.go:141] libmachine: Launching plugin server for driver kvm2
I0224 12:17:24.743649  904797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42357
I0224 12:17:24.744204  904797 main.go:141] libmachine: () Calling .GetVersion
I0224 12:17:24.744830  904797 main.go:141] libmachine: Using API Version  1
I0224 12:17:24.744856  904797 main.go:141] libmachine: () Calling .SetConfigRaw
I0224 12:17:24.745391  904797 main.go:141] libmachine: () Calling .GetMachineName
I0224 12:17:24.745696  904797 main.go:141] libmachine: (functional-892991) Calling .GetState
I0224 12:17:24.747743  904797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0224 12:17:24.747799  904797 main.go:141] libmachine: Launching plugin server for driver kvm2
I0224 12:17:24.764560  904797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33739
I0224 12:17:24.765087  904797 main.go:141] libmachine: () Calling .GetVersion
I0224 12:17:24.765773  904797 main.go:141] libmachine: Using API Version  1
I0224 12:17:24.765813  904797 main.go:141] libmachine: () Calling .SetConfigRaw
I0224 12:17:24.766196  904797 main.go:141] libmachine: () Calling .GetMachineName
I0224 12:17:24.766448  904797 main.go:141] libmachine: (functional-892991) Calling .DriverName
I0224 12:17:24.766673  904797 ssh_runner.go:195] Run: systemctl --version
I0224 12:17:24.766701  904797 main.go:141] libmachine: (functional-892991) Calling .GetSSHHostname
I0224 12:17:24.769882  904797 main.go:141] libmachine: (functional-892991) DBG | domain functional-892991 has defined MAC address 52:54:00:b4:04:c3 in network mk-functional-892991
I0224 12:17:24.770327  904797 main.go:141] libmachine: (functional-892991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:04:c3", ip: ""} in network mk-functional-892991: {Iface:virbr1 ExpiryTime:2025-02-24 13:10:10 +0000 UTC Type:0 Mac:52:54:00:b4:04:c3 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:functional-892991 Clientid:01:52:54:00:b4:04:c3}
I0224 12:17:24.770360  904797 main.go:141] libmachine: (functional-892991) DBG | domain functional-892991 has defined IP address 192.168.39.143 and MAC address 52:54:00:b4:04:c3 in network mk-functional-892991
I0224 12:17:24.770535  904797 main.go:141] libmachine: (functional-892991) Calling .GetSSHPort
I0224 12:17:24.770781  904797 main.go:141] libmachine: (functional-892991) Calling .GetSSHKeyPath
I0224 12:17:24.770952  904797 main.go:141] libmachine: (functional-892991) Calling .GetSSHUsername
I0224 12:17:24.771095  904797 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/functional-892991/id_rsa Username:docker}
I0224 12:17:24.934346  904797 ssh_runner.go:195] Run: sudo crictl images --output json
I0224 12:17:25.085045  904797 main.go:141] libmachine: Making call to close driver server
I0224 12:17:25.085071  904797 main.go:141] libmachine: (functional-892991) Calling .Close
I0224 12:17:25.085429  904797 main.go:141] libmachine: Successfully made call to close driver server
I0224 12:17:25.085451  904797 main.go:141] libmachine: Making call to close connection to plugin binary
I0224 12:17:25.085462  904797 main.go:141] libmachine: Making call to close driver server
I0224 12:17:25.085463  904797 main.go:141] libmachine: (functional-892991) DBG | Closing plugin on server side
I0224 12:17:25.085471  904797 main.go:141] libmachine: (functional-892991) Calling .Close
I0224 12:17:25.085761  904797 main.go:141] libmachine: Successfully made call to close driver server
I0224 12:17:25.085775  904797 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.43s)
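
Besides the table format shown here, the same listing is available as JSON (see the ImageListJson output later in this report) with id, repoTags, repoDigests and size fields, where size is a string of bytes. A sketch of decoding that JSON form of the listing, assuming the binary path and profile used throughout this run:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// image mirrors the fields visible in the `image ls --format json`
	// output in this report; size is reported as a string of bytes.
	type image struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-892991",
			"image", "ls", "--format", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var images []image
		if err := json.Unmarshal(out, &images); err != nil {
			log.Fatal(err)
		}
		for _, img := range images {
			for _, tag := range img.RepoTags {
				fmt.Printf("%-60s %s bytes\n", tag, img.Size)
			}
		}
	}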

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-892991 image ls --format json --alsologtostderr:
[{"id":"f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5","repoDigests":["registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d","registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.2"],"size":"95271321"},{"id":"d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56","repoDigests":["docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26","docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40"],"repoTags":["docker.io/kindest/kindnetd:v20241212-9f82dd49"],"size":"95714353"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"]
,"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-892991"]
,"size":"4943877"},{"id":"0a05e4c820e781d59791998329c7d2d388fae4f2efc06a2a7986d27d49008024","repoDigests":["localhost/minikube-local-cache-test@sha256:f4bb4cd2a9667bdcb3fa35302ecdeda4052df58a1dd564c91ef8c02e4ee2b3b9"],"repoTags":["localhost/minikube-local-cache-test:functional-892991"],"size":"3330"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"
85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef","repoDigests":["registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d","registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.2"],"size":"98055648"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5","registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.2"],"size":"90793286"},{"id":"d8e673e7c9
983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76","registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.2"],"size":"70653254"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"97662d24417b316f60607afbca9f226a2ba58f09d642f27b8e197a89859ddc8e","repoDigests":["docker.io/library/nginx@sha256:088eea90c3d0a540ee5686e7d7471acbd4063b6e97eaf49b5e651665eb7f
4dc7","docker.io/library/nginx@sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34"],"repoTags":["docker.io/library/nginx:latest"],"size":"196149140"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["reg
istry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-892991 image ls --format json --alsologtostderr:
I0224 12:17:24.036170  904773 out.go:345] Setting OutFile to fd 1 ...
I0224 12:17:24.036298  904773 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 12:17:24.036307  904773 out.go:358] Setting ErrFile to fd 2...
I0224 12:17:24.036312  904773 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 12:17:24.036553  904773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-887294/.minikube/bin
I0224 12:17:24.037182  904773 config.go:182] Loaded profile config "functional-892991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0224 12:17:24.037288  904773 config.go:182] Loaded profile config "functional-892991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0224 12:17:24.037734  904773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0224 12:17:24.037828  904773 main.go:141] libmachine: Launching plugin server for driver kvm2
I0224 12:17:24.053408  904773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34627
I0224 12:17:24.053939  904773 main.go:141] libmachine: () Calling .GetVersion
I0224 12:17:24.054573  904773 main.go:141] libmachine: Using API Version  1
I0224 12:17:24.054600  904773 main.go:141] libmachine: () Calling .SetConfigRaw
I0224 12:17:24.054933  904773 main.go:141] libmachine: () Calling .GetMachineName
I0224 12:17:24.055158  904773 main.go:141] libmachine: (functional-892991) Calling .GetState
I0224 12:17:24.057202  904773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0224 12:17:24.057254  904773 main.go:141] libmachine: Launching plugin server for driver kvm2
I0224 12:17:24.072666  904773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41821
I0224 12:17:24.073188  904773 main.go:141] libmachine: () Calling .GetVersion
I0224 12:17:24.073766  904773 main.go:141] libmachine: Using API Version  1
I0224 12:17:24.073793  904773 main.go:141] libmachine: () Calling .SetConfigRaw
I0224 12:17:24.074144  904773 main.go:141] libmachine: () Calling .GetMachineName
I0224 12:17:24.074329  904773 main.go:141] libmachine: (functional-892991) Calling .DriverName
I0224 12:17:24.074533  904773 ssh_runner.go:195] Run: systemctl --version
I0224 12:17:24.074569  904773 main.go:141] libmachine: (functional-892991) Calling .GetSSHHostname
I0224 12:17:24.077044  904773 main.go:141] libmachine: (functional-892991) DBG | domain functional-892991 has defined MAC address 52:54:00:b4:04:c3 in network mk-functional-892991
I0224 12:17:24.077412  904773 main.go:141] libmachine: (functional-892991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:04:c3", ip: ""} in network mk-functional-892991: {Iface:virbr1 ExpiryTime:2025-02-24 13:10:10 +0000 UTC Type:0 Mac:52:54:00:b4:04:c3 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:functional-892991 Clientid:01:52:54:00:b4:04:c3}
I0224 12:17:24.077450  904773 main.go:141] libmachine: (functional-892991) DBG | domain functional-892991 has defined IP address 192.168.39.143 and MAC address 52:54:00:b4:04:c3 in network mk-functional-892991
I0224 12:17:24.077612  904773 main.go:141] libmachine: (functional-892991) Calling .GetSSHPort
I0224 12:17:24.077762  904773 main.go:141] libmachine: (functional-892991) Calling .GetSSHKeyPath
I0224 12:17:24.077906  904773 main.go:141] libmachine: (functional-892991) Calling .GetSSHUsername
I0224 12:17:24.078046  904773 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/functional-892991/id_rsa Username:docker}
I0224 12:17:24.201009  904773 ssh_runner.go:195] Run: sudo crictl images --output json
I0224 12:17:24.660234  904773 main.go:141] libmachine: Making call to close driver server
I0224 12:17:24.660260  904773 main.go:141] libmachine: (functional-892991) Calling .Close
I0224 12:17:24.660591  904773 main.go:141] libmachine: Successfully made call to close driver server
I0224 12:17:24.660610  904773 main.go:141] libmachine: Making call to close connection to plugin binary
I0224 12:17:24.660626  904773 main.go:141] libmachine: Making call to close driver server
I0224 12:17:24.660635  904773 main.go:141] libmachine: (functional-892991) Calling .Close
I0224 12:17:24.660636  904773 main.go:141] libmachine: (functional-892991) DBG | Closing plugin on server side
I0224 12:17:24.660890  904773 main.go:141] libmachine: (functional-892991) DBG | Closing plugin on server side
I0224 12:17:24.660925  904773 main.go:141] libmachine: Successfully made call to close driver server
I0224 12:17:24.660942  904773 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.68s)
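
The listing above is produced from `sudo crictl images --output json` on the node (see the stderr trace) and re-emitted by `minikube image ls --format json` as an array of objects with id, repoDigests, repoTags, and size. A minimal Go sketch for consuming that output; the binary path and profile name are copied from this run, and treating size as a string of bytes follows the values shown above.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the `image ls --format json` output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // reported as a string of bytes, e.g. "95271321"
}

func main() {
	// Binary path and profile name are taken from this log; adjust for your environment.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-892991",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%s  %s bytes\n", img.RepoTags[0], img.Size)
		}
	}
}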

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-892991 image ls --format yaml --alsologtostderr:
- id: f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d
- registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d
repoTags:
- registry.k8s.io/kube-proxy:v1.32.2
size: "95271321"
- id: d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76
- registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.2
size: "70653254"
- id: 97662d24417b316f60607afbca9f226a2ba58f09d642f27b8e197a89859ddc8e
repoDigests:
- docker.io/library/nginx@sha256:088eea90c3d0a540ee5686e7d7471acbd4063b6e97eaf49b5e651665eb7f4dc7
- docker.io/library/nginx@sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34
repoTags:
- docker.io/library/nginx:latest
size: "196149140"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 0a05e4c820e781d59791998329c7d2d388fae4f2efc06a2a7986d27d49008024
repoDigests:
- localhost/minikube-local-cache-test@sha256:f4bb4cd2a9667bdcb3fa35302ecdeda4052df58a1dd564c91ef8c02e4ee2b3b9
repoTags:
- localhost/minikube-local-cache-test:functional-892991
size: "3330"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d
- registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.2
size: "98055648"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56
repoDigests:
- docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26
- docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40
repoTags:
- docker.io/kindest/kindnetd:v20241212-9f82dd49
size: "95714353"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-892991
size: "4943877"
- id: b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5
- registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.2
size: "90793286"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

                                                
                                                
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-892991 image ls --format yaml --alsologtostderr:
I0224 12:17:17.732080  904208 out.go:345] Setting OutFile to fd 1 ...
I0224 12:17:17.732243  904208 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 12:17:17.732255  904208 out.go:358] Setting ErrFile to fd 2...
I0224 12:17:17.732262  904208 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 12:17:17.732509  904208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-887294/.minikube/bin
I0224 12:17:17.733153  904208 config.go:182] Loaded profile config "functional-892991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0224 12:17:17.733292  904208 config.go:182] Loaded profile config "functional-892991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0224 12:17:17.733692  904208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0224 12:17:17.733755  904208 main.go:141] libmachine: Launching plugin server for driver kvm2
I0224 12:17:17.749456  904208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33809
I0224 12:17:17.749959  904208 main.go:141] libmachine: () Calling .GetVersion
I0224 12:17:17.750577  904208 main.go:141] libmachine: Using API Version  1
I0224 12:17:17.750599  904208 main.go:141] libmachine: () Calling .SetConfigRaw
I0224 12:17:17.751002  904208 main.go:141] libmachine: () Calling .GetMachineName
I0224 12:17:17.751228  904208 main.go:141] libmachine: (functional-892991) Calling .GetState
I0224 12:17:17.753234  904208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0224 12:17:17.753278  904208 main.go:141] libmachine: Launching plugin server for driver kvm2
I0224 12:17:17.769905  904208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40773
I0224 12:17:17.770392  904208 main.go:141] libmachine: () Calling .GetVersion
I0224 12:17:17.770926  904208 main.go:141] libmachine: Using API Version  1
I0224 12:17:17.770953  904208 main.go:141] libmachine: () Calling .SetConfigRaw
I0224 12:17:17.771292  904208 main.go:141] libmachine: () Calling .GetMachineName
I0224 12:17:17.771499  904208 main.go:141] libmachine: (functional-892991) Calling .DriverName
I0224 12:17:17.771711  904208 ssh_runner.go:195] Run: systemctl --version
I0224 12:17:17.771736  904208 main.go:141] libmachine: (functional-892991) Calling .GetSSHHostname
I0224 12:17:17.774480  904208 main.go:141] libmachine: (functional-892991) DBG | domain functional-892991 has defined MAC address 52:54:00:b4:04:c3 in network mk-functional-892991
I0224 12:17:17.774929  904208 main.go:141] libmachine: (functional-892991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:04:c3", ip: ""} in network mk-functional-892991: {Iface:virbr1 ExpiryTime:2025-02-24 13:10:10 +0000 UTC Type:0 Mac:52:54:00:b4:04:c3 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:functional-892991 Clientid:01:52:54:00:b4:04:c3}
I0224 12:17:17.774963  904208 main.go:141] libmachine: (functional-892991) DBG | domain functional-892991 has defined IP address 192.168.39.143 and MAC address 52:54:00:b4:04:c3 in network mk-functional-892991
I0224 12:17:17.775098  904208 main.go:141] libmachine: (functional-892991) Calling .GetSSHPort
I0224 12:17:17.775304  904208 main.go:141] libmachine: (functional-892991) Calling .GetSSHKeyPath
I0224 12:17:17.775477  904208 main.go:141] libmachine: (functional-892991) Calling .GetSSHUsername
I0224 12:17:17.775630  904208 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/functional-892991/id_rsa Username:docker}
I0224 12:17:17.873466  904208 ssh_runner.go:195] Run: sudo crictl images --output json
I0224 12:17:17.952185  904208 main.go:141] libmachine: Making call to close driver server
I0224 12:17:17.952207  904208 main.go:141] libmachine: (functional-892991) Calling .Close
I0224 12:17:17.952519  904208 main.go:141] libmachine: Successfully made call to close driver server
I0224 12:17:17.952581  904208 main.go:141] libmachine: Making call to close connection to plugin binary
I0224 12:17:17.952603  904208 main.go:141] libmachine: Making call to close driver server
I0224 12:17:17.952617  904208 main.go:141] libmachine: (functional-892991) Calling .Close
I0224 12:17:17.952542  904208 main.go:141] libmachine: (functional-892991) DBG | Closing plugin on server side
I0224 12:17:17.952902  904208 main.go:141] libmachine: (functional-892991) DBG | Closing plugin on server side
I0224 12:17:17.952903  904208 main.go:141] libmachine: Successfully made call to close driver server
I0224 12:17:17.952937  904208 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)
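
The YAML variant carries the same fields as the JSON listing. A short sketch, assuming gopkg.in/yaml.v3 as the decoder (any YAML unmarshaller with the same shape works), that decodes the listing and checks for one of the tags shown above.

package main

import (
	"fmt"
	"os/exec"

	"gopkg.in/yaml.v3"
)

// yamlImage mirrors the per-image fields in the `image ls --format yaml` output above.
type yamlImage struct {
	ID          string   `yaml:"id"`
	RepoDigests []string `yaml:"repoDigests"`
	RepoTags    []string `yaml:"repoTags"`
	Size        string   `yaml:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-892991",
		"image", "ls", "--format", "yaml").Output()
	if err != nil {
		panic(err)
	}
	var images []yamlImage
	if err := yaml.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	// Check that one of the tags from the listing above is present.
	want := "registry.k8s.io/pause:3.10"
	for _, img := range images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("found", want)
				return
			}
		}
	}
	fmt.Println("missing", want)
}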

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (7.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-892991 ssh pgrep buildkitd: exit status 1 (203.801202ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 image build -t localhost/my-image:functional-892991 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-892991 image build -t localhost/my-image:functional-892991 testdata/build --alsologtostderr: (7.302187581s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-amd64 -p functional-892991 image build -t localhost/my-image:functional-892991 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 3208ac86661
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-892991
--> 7a3dd637725
Successfully tagged localhost/my-image:functional-892991
7a3dd6377250ae8b8571a62e50be71465ffbfe0529cb8537c3681f34a130ec76
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-892991 image build -t localhost/my-image:functional-892991 testdata/build --alsologtostderr:
I0224 12:17:18.211600  904262 out.go:345] Setting OutFile to fd 1 ...
I0224 12:17:18.211733  904262 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 12:17:18.211743  904262 out.go:358] Setting ErrFile to fd 2...
I0224 12:17:18.211748  904262 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 12:17:18.211962  904262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-887294/.minikube/bin
I0224 12:17:18.212646  904262 config.go:182] Loaded profile config "functional-892991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0224 12:17:18.213364  904262 config.go:182] Loaded profile config "functional-892991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0224 12:17:18.213771  904262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0224 12:17:18.213822  904262 main.go:141] libmachine: Launching plugin server for driver kvm2
I0224 12:17:18.231135  904262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33145
I0224 12:17:18.231707  904262 main.go:141] libmachine: () Calling .GetVersion
I0224 12:17:18.232309  904262 main.go:141] libmachine: Using API Version  1
I0224 12:17:18.232342  904262 main.go:141] libmachine: () Calling .SetConfigRaw
I0224 12:17:18.232687  904262 main.go:141] libmachine: () Calling .GetMachineName
I0224 12:17:18.232921  904262 main.go:141] libmachine: (functional-892991) Calling .GetState
I0224 12:17:18.235006  904262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0224 12:17:18.235057  904262 main.go:141] libmachine: Launching plugin server for driver kvm2
I0224 12:17:18.251975  904262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37247
I0224 12:17:18.252484  904262 main.go:141] libmachine: () Calling .GetVersion
I0224 12:17:18.253085  904262 main.go:141] libmachine: Using API Version  1
I0224 12:17:18.253112  904262 main.go:141] libmachine: () Calling .SetConfigRaw
I0224 12:17:18.253675  904262 main.go:141] libmachine: () Calling .GetMachineName
I0224 12:17:18.253892  904262 main.go:141] libmachine: (functional-892991) Calling .DriverName
I0224 12:17:18.254095  904262 ssh_runner.go:195] Run: systemctl --version
I0224 12:17:18.254120  904262 main.go:141] libmachine: (functional-892991) Calling .GetSSHHostname
I0224 12:17:18.257299  904262 main.go:141] libmachine: (functional-892991) DBG | domain functional-892991 has defined MAC address 52:54:00:b4:04:c3 in network mk-functional-892991
I0224 12:17:18.257790  904262 main.go:141] libmachine: (functional-892991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:04:c3", ip: ""} in network mk-functional-892991: {Iface:virbr1 ExpiryTime:2025-02-24 13:10:10 +0000 UTC Type:0 Mac:52:54:00:b4:04:c3 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:functional-892991 Clientid:01:52:54:00:b4:04:c3}
I0224 12:17:18.257865  904262 main.go:141] libmachine: (functional-892991) DBG | domain functional-892991 has defined IP address 192.168.39.143 and MAC address 52:54:00:b4:04:c3 in network mk-functional-892991
I0224 12:17:18.257896  904262 main.go:141] libmachine: (functional-892991) Calling .GetSSHPort
I0224 12:17:18.258148  904262 main.go:141] libmachine: (functional-892991) Calling .GetSSHKeyPath
I0224 12:17:18.258381  904262 main.go:141] libmachine: (functional-892991) Calling .GetSSHUsername
I0224 12:17:18.258599  904262 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/functional-892991/id_rsa Username:docker}
I0224 12:17:18.341426  904262 build_images.go:161] Building image from path: /tmp/build.2483115595.tar
I0224 12:17:18.341538  904262 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0224 12:17:18.354608  904262 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2483115595.tar
I0224 12:17:18.359946  904262 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2483115595.tar: stat -c "%s %y" /var/lib/minikube/build/build.2483115595.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2483115595.tar': No such file or directory
I0224 12:17:18.360001  904262 ssh_runner.go:362] scp /tmp/build.2483115595.tar --> /var/lib/minikube/build/build.2483115595.tar (3072 bytes)
I0224 12:17:18.389274  904262 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2483115595
I0224 12:17:18.401764  904262 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2483115595 -xf /var/lib/minikube/build/build.2483115595.tar
I0224 12:17:18.412959  904262 crio.go:315] Building image: /var/lib/minikube/build/build.2483115595
I0224 12:17:18.413062  904262 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-892991 /var/lib/minikube/build/build.2483115595 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0224 12:17:25.436391  904262 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-892991 /var/lib/minikube/build/build.2483115595 --cgroup-manager=cgroupfs: (7.023294054s)
I0224 12:17:25.436493  904262 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2483115595
I0224 12:17:25.447512  904262 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2483115595.tar
I0224 12:17:25.458238  904262 build_images.go:217] Built localhost/my-image:functional-892991 from /tmp/build.2483115595.tar
I0224 12:17:25.458287  904262 build_images.go:133] succeeded building to: functional-892991
I0224 12:17:25.458295  904262 build_images.go:134] failed building to: 
I0224 12:17:25.458327  904262 main.go:141] libmachine: Making call to close driver server
I0224 12:17:25.458343  904262 main.go:141] libmachine: (functional-892991) Calling .Close
I0224 12:17:25.458783  904262 main.go:141] libmachine: (functional-892991) DBG | Closing plugin on server side
I0224 12:17:25.458799  904262 main.go:141] libmachine: Successfully made call to close driver server
I0224 12:17:25.458813  904262 main.go:141] libmachine: Making call to close connection to plugin binary
I0224 12:17:25.458828  904262 main.go:141] libmachine: Making call to close driver server
I0224 12:17:25.458835  904262 main.go:141] libmachine: (functional-892991) Calling .Close
I0224 12:17:25.459055  904262 main.go:141] libmachine: (functional-892991) DBG | Closing plugin on server side
I0224 12:17:25.459097  904262 main.go:141] libmachine: Successfully made call to close driver server
I0224 12:17:25.459113  904262 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 image ls
2025/02/24 12:17:28 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (7.73s)
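
On this crio profile the pgrep check above exits 1 (no buildkitd), and the build falls through to `sudo podman build` inside the VM, with the build context shipped in as a tar under /var/lib/minikube/build. A sketch of driving the same round trip from the host side, reusing only commands that appear in this log; `testdata/build` stands for any directory containing a Dockerfile.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "functional-892991" // profile name from this run
	tag := "localhost/my-image:" + profile

	// Build from a directory containing a Dockerfile (testdata/build in the test).
	build := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"image", "build", "-t", tag, "testdata/build", "--alsologtostderr")
	if out, err := build.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("build failed: %v\n%s", err, out))
	}

	// Confirm the new tag is visible to the runtime, as the test does with `image ls`.
	ls, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "image", "ls").Output()
	if err != nil {
		panic(err)
	}
	if !strings.Contains(string(ls), tag) {
		panic("built image not found in image ls output")
	}
	fmt.Println("built and listed:", tag)
}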

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.922078948s)
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-892991
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.94s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (12.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-892991 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-892991 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-v59s8" [5b1fea8a-e7f8-434b-8c09-d65b909d85f7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-v59s8" [5b1fea8a-e7f8-434b-8c09-d65b909d85f7] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.002978423s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.18s)
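
The test drives plain kubectl: create a deployment from registry.k8s.io/echoserver:1.8, expose it as a NodePort on 8080, then wait for pods labelled app=hello-node. A sketch of the same sequence; the readiness wait here uses `kubectl wait` instead of the test's own polling loop.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and panics with its combined output on failure.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%s %v: %v\n%s", name, args, err, out))
	}
}

func main() {
	ctx := "functional-892991" // kube context / profile name from this run

	// Same sequence the test drives: deploy, expose as NodePort, wait for readiness.
	run("kubectl", "--context", ctx, "create", "deployment", "hello-node",
		"--image=registry.k8s.io/echoserver:1.8")
	run("kubectl", "--context", ctx, "expose", "deployment", "hello-node",
		"--type=NodePort", "--port=8080")
	run("kubectl", "--context", ctx, "wait", "--for=condition=ready",
		"pod", "-l", "app=hello-node", "--timeout=600s")
	fmt.Println("hello-node is ready")
}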

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 image load --daemon kicbase/echo-server:functional-892991 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-amd64 -p functional-892991 image load --daemon kicbase/echo-server:functional-892991 --alsologtostderr: (2.682401499s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.91s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 image load --daemon kicbase/echo-server:functional-892991 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-892991
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 image load --daemon kicbase/echo-server:functional-892991 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 image save kicbase/echo-server:functional-892991 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.85s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (2.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 image rm kicbase/echo-server:functional-892991 --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-892991 image rm kicbase/echo-server:functional-892991 --alsologtostderr: (1.870079032s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (2.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: (dbg) Done: out/minikube-linux-amd64 -p functional-892991 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.76293926s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.68s)
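
ImageSaveToFile, ImageRemove, and ImageLoadFromFile together exercise a save/remove/restore round trip through a tarball. A combined sketch of that round trip; the tar path is a stand-in for the workspace path used above, and the final check just scans `image ls` output for the tag.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// mk runs the locally built minikube binary against the given profile.
func mk(profile string, args ...string) ([]byte, error) {
	return exec.Command("out/minikube-linux-amd64",
		append([]string{"-p", profile}, args...)...).CombinedOutput()
}

func main() {
	profile := "functional-892991"
	img := "kicbase/echo-server:" + profile
	tarPath := "/tmp/echo-server-save.tar" // the test writes into its Jenkins workspace instead

	steps := [][]string{
		{"image", "save", img, tarPath}, // ImageSaveToFile
		{"image", "rm", img},            // ImageRemove
		{"image", "load", tarPath},      // ImageLoadFromFile
	}
	for _, s := range steps {
		if out, err := mk(profile, s...); err != nil {
			panic(fmt.Sprintf("%v: %v\n%s", s, err, out))
		}
	}
	out, err := mk(profile, "image", "ls")
	if err != nil {
		panic(err)
	}
	fmt.Println("image restored:", strings.Contains(string(out), img))
}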

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 service list -o json
functional_test.go:1511: Took "382.104291ms" to run "out/minikube-linux-amd64 -p functional-892991 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.39.143:32745
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.39.143:32745
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)
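
`minikube service hello-node --url` resolves the NodePort endpoint (http://192.168.39.143:32745 in this run). A sketch that resolves the URL the same way and probes it with a bounded HTTP client, assuming the command prints a single URL on stdout.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Resolve the NodePort URL the same way the test does, then probe it over HTTP.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-892991",
		"service", "hello-node", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out)) // e.g. http://192.168.39.143:32745

	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %s (%d bytes)\n", url, resp.Status, len(body))
}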

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (4.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-892991
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 image save --daemon kicbase/echo-server:functional-892991 --alsologtostderr
functional_test.go:441: (dbg) Done: out/minikube-linux-amd64 -p functional-892991 image save --daemon kicbase/echo-server:functional-892991 --alsologtostderr: (4.360905093s)
functional_test.go:449: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-892991
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (4.40s)
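
`image save --daemon` pushes the in-cluster image back into the host docker daemon, where it surfaces under the localhost/ prefix (see the `docker image inspect localhost/kicbase/echo-server:functional-892991` step above). A sketch of that check, reusing only the commands shown in this log.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "functional-892991"
	img := "kicbase/echo-server:" + profile

	// Push the in-cluster image back into the host docker daemon...
	if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"image", "save", "--daemon", img).CombinedOutput(); err != nil {
		panic(fmt.Sprintf("save --daemon: %v\n%s", err, out))
	}
	// ...and confirm docker can now inspect it; per the log it appears under localhost/.
	if out, err := exec.Command("docker", "image", "inspect",
		"localhost/"+img).CombinedOutput(); err != nil {
		panic(fmt.Sprintf("docker image inspect: %v\n%s", err, out))
	}
	fmt.Println("image present in local docker daemon:", "localhost/"+img)
}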

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "479.595989ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "65.168646ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.54s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "419.718204ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "59.270622ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)
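
The ProfileCmd tests mostly measure how long `profile list` takes in its different output modes. A sketch that times `profile list -o json` and decodes the result generically, without assuming the exact schema of the JSON it prints.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	elapsed := time.Since(start)

	// Decode into an untyped value rather than assuming the output's schema.
	var payload interface{}
	if err := json.Unmarshal(out, &payload); err != nil {
		panic(err)
	}
	fmt.Printf("profile list -o json took %s and decoded into a %T value\n", elapsed, payload)
}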

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (11.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-892991 /tmp/TestFunctionalparallelMountCmdany-port2539088541/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1740399428637492787" to /tmp/TestFunctionalparallelMountCmdany-port2539088541/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1740399428637492787" to /tmp/TestFunctionalparallelMountCmdany-port2539088541/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1740399428637492787" to /tmp/TestFunctionalparallelMountCmdany-port2539088541/001/test-1740399428637492787
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-892991 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (276.52475ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0224 12:17:08.914301  894564 retry.go:31] will retry after 372.139204ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 24 12:17 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 24 12:17 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 24 12:17 test-1740399428637492787
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh cat /mount-9p/test-1740399428637492787
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-892991 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e7b8ed20-1959-4e38-8bee-4394d3adb279] Pending
helpers_test.go:344: "busybox-mount" [e7b8ed20-1959-4e38-8bee-4394d3adb279] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e7b8ed20-1959-4e38-8bee-4394d3adb279] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e7b8ed20-1959-4e38-8bee-4394d3adb279] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.003827192s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-892991 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-892991 /tmp/TestFunctionalparallelMountCmdany-port2539088541/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.71s)
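
The mount test starts `minikube mount` as a daemon and then retries `findmnt -T /mount-9p` over ssh until the 9p mount appears (the first attempt above fails while the mount is still coming up). A sketch of that retry loop, using the same ssh command string as the log; the attempt count and delay are arbitrary.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	profile := "functional-892991"
	// Poll until the 9p mount shows up, mirroring the retry the test performs
	// after its first `findmnt` attempt fails.
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Printf("mounted after %d attempt(s):\n%s", attempt, out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("/mount-9p never appeared")
}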

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-892991 /tmp/TestFunctionalparallelMountCmdspecific-port1869167773/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-892991 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (320.043827ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0224 12:17:20.663226  894564 retry.go:31] will retry after 387.994763ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-892991 /tmp/TestFunctionalparallelMountCmdspecific-port1869167773/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-892991 ssh "sudo umount -f /mount-9p": exit status 1 (251.689847ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-892991 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-892991 /tmp/TestFunctionalparallelMountCmdspecific-port1869167773/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.90s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-892991 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1240357450/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-892991 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1240357450/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-892991 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1240357450/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-892991 ssh "findmnt -T" /mount1: exit status 1 (312.361659ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0224 12:17:22.558112  894564 retry.go:31] will retry after 568.396768ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-892991 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-892991 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-892991 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1240357450/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-892991 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1240357450/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-892991 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1240357450/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.75s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-892991
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-892991
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-892991
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (200.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-911088 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0224 12:19:12.769476  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-911088 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m19.789644536s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (200.50s)
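
StartCluster brings up a multi-control-plane cluster with `--ha` and then asks `status` to report every node. A sketch that replays the same two commands with the flags from this run; expect the start step to take a few minutes, as the 3m19s timing above shows.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "ha-911088" // profile name from this run
	// Start the HA cluster with the same flags the test passes...
	start := exec.Command("out/minikube-linux-amd64", "start", "-p", profile,
		"--wait=true", "--memory=2200", "--ha", "-v=7", "--alsologtostderr",
		"--driver=kvm2", "--container-runtime=crio")
	if out, err := start.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("start: %v\n%s", err, out))
	}
	// ...then report status across all nodes, as the test does next.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"status", "-v=7", "--alsologtostderr").CombinedOutput()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out)
}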

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-911088 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-911088 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-911088 -- rollout status deployment/busybox: (4.912445838s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-911088 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-911088 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-911088 -- exec busybox-58667487b6-2c2gr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-911088 -- exec busybox-58667487b6-2r4v6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-911088 -- exec busybox-58667487b6-pf45f -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-911088 -- exec busybox-58667487b6-2c2gr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-911088 -- exec busybox-58667487b6-2r4v6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-911088 -- exec busybox-58667487b6-pf45f -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-911088 -- exec busybox-58667487b6-2c2gr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-911088 -- exec busybox-58667487b6-2r4v6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-911088 -- exec busybox-58667487b6-pf45f -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.26s)
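The DeployApp step applies testdata/ha/ha-pod-dns-test.yaml, waits for the busybox deployment to roll out, then resolves kubernetes.default.svc.cluster.local from every replica. A sketch of that verification loop, assuming the same workspace-relative binary and profile as above; pod names are discovered at run time with the same jsonpath query the test uses rather than hard-coded.

package main

import (
    "log"
    "os/exec"
    "strings"
)

// run shells out to the minikube binary used throughout this report.
func run(args ...string) (string, error) {
    out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
    return string(out), err
}

func main() {
    // Discover the busybox pod names the same way the test does.
    names, err := run("kubectl", "-p", "ha-911088", "--",
        "get", "pods", "-o", "jsonpath={.items[*].metadata.name}")
    if err != nil {
        log.Fatalf("listing pods failed: %v\n%s", err, names)
    }
    for _, pod := range strings.Fields(names) {
        // In-cluster DNS must resolve the API service from every replica.
        out, err := run("kubectl", "-p", "ha-911088", "--",
            "exec", pod, "--", "nslookup", "kubernetes.default.svc.cluster.local")
        if err != nil {
            log.Fatalf("DNS lookup from %s failed: %v\n%s", pod, err, out)
        }
        log.Printf("%s resolved cluster DNS", pod)
    }
}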

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-911088 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-911088 -- exec busybox-58667487b6-2c2gr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-911088 -- exec busybox-58667487b6-2c2gr -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-911088 -- exec busybox-58667487b6-2r4v6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-911088 -- exec busybox-58667487b6-2r4v6 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-911088 -- exec busybox-58667487b6-pf45f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-911088 -- exec busybox-58667487b6-pf45f -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.26s)
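PingHostFromPods leans on a small shell pipeline inside each busybox pod: "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3" keeps the fifth line of nslookup's output and its third space-separated field, which for busybox's nslookup is the resolved address, and the test then pings that address once from the same pod. A sketch of those two steps against a single pod; the pod name is copied from this run and would normally be discovered first, as in the DeployApp sketch above.

package main

import (
    "log"
    "os/exec"
    "strings"
)

func main() {
    // Same pipeline the test runs inside the pod to extract the host IP.
    pipeline := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out, err := exec.Command("out/minikube-linux-amd64", "kubectl", "-p", "ha-911088", "--",
        "exec", "busybox-58667487b6-2c2gr", "--", "sh", "-c", pipeline).Output()
    if err != nil {
        log.Fatalf("resolving host.minikube.internal failed: %v", err)
    }
    hostIP := strings.TrimSpace(string(out))
    if hostIP == "" {
        log.Fatal("pipeline returned no address")
    }

    // One ping from inside the pod proves the host network is reachable.
    ping := exec.Command("out/minikube-linux-amd64", "kubectl", "-p", "ha-911088", "--",
        "exec", "busybox-58667487b6-2c2gr", "--", "sh", "-c", "ping -c 1 "+hostIP)
    if err := ping.Run(); err != nil {
        log.Fatalf("ping to %s failed: %v", hostIP, err)
    }
    log.Printf("host.minikube.internal (%s) is reachable from the pod", hostIP)
}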

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (60.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-911088 -v=7 --alsologtostderr
E0224 12:21:46.849286  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:21:46.855840  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:21:46.867391  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:21:46.888919  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:21:46.930414  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:21:47.011911  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:21:47.173505  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:21:47.495449  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:21:48.137146  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:21:49.418937  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:21:51.981345  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:21:57.102808  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-911088 -v=7 --alsologtostderr: (59.137165466s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (60.04s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-911088 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0224 12:22:07.344551  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 cp testdata/cp-test.txt ha-911088:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 ssh -n ha-911088 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 cp ha-911088:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1009953185/001/cp-test_ha-911088.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 ssh -n ha-911088 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 cp ha-911088:/home/docker/cp-test.txt ha-911088-m02:/home/docker/cp-test_ha-911088_ha-911088-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 ssh -n ha-911088 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 ssh -n ha-911088-m02 "sudo cat /home/docker/cp-test_ha-911088_ha-911088-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 cp ha-911088:/home/docker/cp-test.txt ha-911088-m03:/home/docker/cp-test_ha-911088_ha-911088-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 ssh -n ha-911088 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 ssh -n ha-911088-m03 "sudo cat /home/docker/cp-test_ha-911088_ha-911088-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 cp ha-911088:/home/docker/cp-test.txt ha-911088-m04:/home/docker/cp-test_ha-911088_ha-911088-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 ssh -n ha-911088 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 ssh -n ha-911088-m04 "sudo cat /home/docker/cp-test_ha-911088_ha-911088-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 cp testdata/cp-test.txt ha-911088-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 ssh -n ha-911088-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 cp ha-911088-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1009953185/001/cp-test_ha-911088-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 ssh -n ha-911088-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 cp ha-911088-m02:/home/docker/cp-test.txt ha-911088:/home/docker/cp-test_ha-911088-m02_ha-911088.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 ssh -n ha-911088-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 ssh -n ha-911088 "sudo cat /home/docker/cp-test_ha-911088-m02_ha-911088.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 cp ha-911088-m02:/home/docker/cp-test.txt ha-911088-m03:/home/docker/cp-test_ha-911088-m02_ha-911088-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 ssh -n ha-911088-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 ssh -n ha-911088-m03 "sudo cat /home/docker/cp-test_ha-911088-m02_ha-911088-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 cp ha-911088-m02:/home/docker/cp-test.txt ha-911088-m04:/home/docker/cp-test_ha-911088-m02_ha-911088-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 ssh -n ha-911088-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 ssh -n ha-911088-m04 "sudo cat /home/docker/cp-test_ha-911088-m02_ha-911088-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 cp testdata/cp-test.txt ha-911088-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 ssh -n ha-911088-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 cp ha-911088-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1009953185/001/cp-test_ha-911088-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 ssh -n ha-911088-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 cp ha-911088-m03:/home/docker/cp-test.txt ha-911088:/home/docker/cp-test_ha-911088-m03_ha-911088.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 ssh -n ha-911088-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 ssh -n ha-911088 "sudo cat /home/docker/cp-test_ha-911088-m03_ha-911088.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 cp ha-911088-m03:/home/docker/cp-test.txt ha-911088-m02:/home/docker/cp-test_ha-911088-m03_ha-911088-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 ssh -n ha-911088-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 ssh -n ha-911088-m02 "sudo cat /home/docker/cp-test_ha-911088-m03_ha-911088-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 cp ha-911088-m03:/home/docker/cp-test.txt ha-911088-m04:/home/docker/cp-test_ha-911088-m03_ha-911088-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 ssh -n ha-911088-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 ssh -n ha-911088-m04 "sudo cat /home/docker/cp-test_ha-911088-m03_ha-911088-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 cp testdata/cp-test.txt ha-911088-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 ssh -n ha-911088-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 cp ha-911088-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1009953185/001/cp-test_ha-911088-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 ssh -n ha-911088-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 cp ha-911088-m04:/home/docker/cp-test.txt ha-911088:/home/docker/cp-test_ha-911088-m04_ha-911088.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 ssh -n ha-911088-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 ssh -n ha-911088 "sudo cat /home/docker/cp-test_ha-911088-m04_ha-911088.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 cp ha-911088-m04:/home/docker/cp-test.txt ha-911088-m02:/home/docker/cp-test_ha-911088-m04_ha-911088-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 ssh -n ha-911088-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 ssh -n ha-911088-m02 "sudo cat /home/docker/cp-test_ha-911088-m04_ha-911088-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 cp ha-911088-m04:/home/docker/cp-test.txt ha-911088-m03:/home/docker/cp-test_ha-911088-m04_ha-911088-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 ssh -n ha-911088-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 ssh -n ha-911088-m03 "sudo cat /home/docker/cp-test_ha-911088-m04_ha-911088-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.72s)
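The CopyFile step exercises "minikube cp" in every direction (host to node, node to host, node to node) and verifies each transfer by reading the file back over "minikube ssh". A sketch of one host-to-node round trip built from the same two commands the log shows; the node and file names are taken from this run.

package main

import (
    "fmt"
    "log"
    "os/exec"
)

// copyAndVerify pushes a local file to a node with minikube cp, then reads it
// back over minikube ssh so the caller can compare contents.
func copyAndVerify(node, local, remote string) (string, error) {
    if out, err := exec.Command("out/minikube-linux-amd64", "-p", "ha-911088",
        "cp", local, node+":"+remote).CombinedOutput(); err != nil {
        return "", fmt.Errorf("cp to %s failed: %v\n%s", node, err, out)
    }
    out, err := exec.Command("out/minikube-linux-amd64", "-p", "ha-911088",
        "ssh", "-n", node, "sudo cat "+remote).Output()
    return string(out), err
}

func main() {
    content, err := copyAndVerify("ha-911088-m02", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("round-tripped content:\n%s", content)
}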

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (91.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 node stop m02 -v=7 --alsologtostderr
E0224 12:22:27.826498  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:23:08.787978  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-911088 node stop m02 -v=7 --alsologtostderr: (1m30.822482037s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-911088 status -v=7 --alsologtostderr: exit status 7 (702.314927ms)

                                                
                                                
-- stdout --
	ha-911088
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-911088-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-911088-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-911088-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0224 12:23:52.098511  909953 out.go:345] Setting OutFile to fd 1 ...
	I0224 12:23:52.098643  909953 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:23:52.098651  909953 out.go:358] Setting ErrFile to fd 2...
	I0224 12:23:52.098655  909953 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:23:52.098895  909953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-887294/.minikube/bin
	I0224 12:23:52.099085  909953 out.go:352] Setting JSON to false
	I0224 12:23:52.099113  909953 mustload.go:65] Loading cluster: ha-911088
	I0224 12:23:52.099196  909953 notify.go:220] Checking for updates...
	I0224 12:23:52.099595  909953 config.go:182] Loaded profile config "ha-911088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 12:23:52.099621  909953 status.go:174] checking status of ha-911088 ...
	I0224 12:23:52.100182  909953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:23:52.100233  909953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:23:52.119391  909953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32951
	I0224 12:23:52.120009  909953 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:23:52.120693  909953 main.go:141] libmachine: Using API Version  1
	I0224 12:23:52.120729  909953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:23:52.121157  909953 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:23:52.121390  909953 main.go:141] libmachine: (ha-911088) Calling .GetState
	I0224 12:23:52.123047  909953 status.go:371] ha-911088 host status = "Running" (err=<nil>)
	I0224 12:23:52.123075  909953 host.go:66] Checking if "ha-911088" exists ...
	I0224 12:23:52.123445  909953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:23:52.123498  909953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:23:52.139511  909953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34387
	I0224 12:23:52.140012  909953 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:23:52.140572  909953 main.go:141] libmachine: Using API Version  1
	I0224 12:23:52.140596  909953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:23:52.140915  909953 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:23:52.141121  909953 main.go:141] libmachine: (ha-911088) Calling .GetIP
	I0224 12:23:52.144317  909953 main.go:141] libmachine: (ha-911088) DBG | domain ha-911088 has defined MAC address 52:54:00:61:b7:87 in network mk-ha-911088
	I0224 12:23:52.144798  909953 main.go:141] libmachine: (ha-911088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:b7:87", ip: ""} in network mk-ha-911088: {Iface:virbr1 ExpiryTime:2025-02-24 13:17:53 +0000 UTC Type:0 Mac:52:54:00:61:b7:87 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-911088 Clientid:01:52:54:00:61:b7:87}
	I0224 12:23:52.144824  909953 main.go:141] libmachine: (ha-911088) DBG | domain ha-911088 has defined IP address 192.168.39.49 and MAC address 52:54:00:61:b7:87 in network mk-ha-911088
	I0224 12:23:52.145062  909953 host.go:66] Checking if "ha-911088" exists ...
	I0224 12:23:52.145475  909953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:23:52.145526  909953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:23:52.161984  909953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35597
	I0224 12:23:52.162521  909953 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:23:52.163054  909953 main.go:141] libmachine: Using API Version  1
	I0224 12:23:52.163075  909953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:23:52.163464  909953 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:23:52.163669  909953 main.go:141] libmachine: (ha-911088) Calling .DriverName
	I0224 12:23:52.163905  909953 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 12:23:52.163945  909953 main.go:141] libmachine: (ha-911088) Calling .GetSSHHostname
	I0224 12:23:52.166822  909953 main.go:141] libmachine: (ha-911088) DBG | domain ha-911088 has defined MAC address 52:54:00:61:b7:87 in network mk-ha-911088
	I0224 12:23:52.167403  909953 main.go:141] libmachine: (ha-911088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:b7:87", ip: ""} in network mk-ha-911088: {Iface:virbr1 ExpiryTime:2025-02-24 13:17:53 +0000 UTC Type:0 Mac:52:54:00:61:b7:87 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-911088 Clientid:01:52:54:00:61:b7:87}
	I0224 12:23:52.167438  909953 main.go:141] libmachine: (ha-911088) DBG | domain ha-911088 has defined IP address 192.168.39.49 and MAC address 52:54:00:61:b7:87 in network mk-ha-911088
	I0224 12:23:52.167604  909953 main.go:141] libmachine: (ha-911088) Calling .GetSSHPort
	I0224 12:23:52.167779  909953 main.go:141] libmachine: (ha-911088) Calling .GetSSHKeyPath
	I0224 12:23:52.167963  909953 main.go:141] libmachine: (ha-911088) Calling .GetSSHUsername
	I0224 12:23:52.168148  909953 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/ha-911088/id_rsa Username:docker}
	I0224 12:23:52.259386  909953 ssh_runner.go:195] Run: systemctl --version
	I0224 12:23:52.267008  909953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 12:23:52.289894  909953 kubeconfig.go:125] found "ha-911088" server: "https://192.168.39.254:8443"
	I0224 12:23:52.289936  909953 api_server.go:166] Checking apiserver status ...
	I0224 12:23:52.289986  909953 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 12:23:52.311831  909953 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1159/cgroup
	W0224 12:23:52.327322  909953 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1159/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0224 12:23:52.327404  909953 ssh_runner.go:195] Run: ls
	I0224 12:23:52.333752  909953 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0224 12:23:52.339267  909953 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0224 12:23:52.339306  909953 status.go:463] ha-911088 apiserver status = Running (err=<nil>)
	I0224 12:23:52.339318  909953 status.go:176] ha-911088 status: &{Name:ha-911088 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0224 12:23:52.339337  909953 status.go:174] checking status of ha-911088-m02 ...
	I0224 12:23:52.339655  909953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:23:52.339700  909953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:23:52.354847  909953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38741
	I0224 12:23:52.355422  909953 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:23:52.355945  909953 main.go:141] libmachine: Using API Version  1
	I0224 12:23:52.355968  909953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:23:52.356319  909953 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:23:52.356525  909953 main.go:141] libmachine: (ha-911088-m02) Calling .GetState
	I0224 12:23:52.358232  909953 status.go:371] ha-911088-m02 host status = "Stopped" (err=<nil>)
	I0224 12:23:52.358244  909953 status.go:384] host is not running, skipping remaining checks
	I0224 12:23:52.358250  909953 status.go:176] ha-911088-m02 status: &{Name:ha-911088-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0224 12:23:52.358269  909953 status.go:174] checking status of ha-911088-m03 ...
	I0224 12:23:52.358579  909953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:23:52.358620  909953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:23:52.374991  909953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37675
	I0224 12:23:52.375513  909953 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:23:52.376073  909953 main.go:141] libmachine: Using API Version  1
	I0224 12:23:52.376096  909953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:23:52.376495  909953 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:23:52.376719  909953 main.go:141] libmachine: (ha-911088-m03) Calling .GetState
	I0224 12:23:52.378358  909953 status.go:371] ha-911088-m03 host status = "Running" (err=<nil>)
	I0224 12:23:52.378379  909953 host.go:66] Checking if "ha-911088-m03" exists ...
	I0224 12:23:52.378732  909953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:23:52.378789  909953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:23:52.395210  909953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36647
	I0224 12:23:52.395743  909953 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:23:52.396348  909953 main.go:141] libmachine: Using API Version  1
	I0224 12:23:52.396379  909953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:23:52.396753  909953 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:23:52.396938  909953 main.go:141] libmachine: (ha-911088-m03) Calling .GetIP
	I0224 12:23:52.400198  909953 main.go:141] libmachine: (ha-911088-m03) DBG | domain ha-911088-m03 has defined MAC address 52:54:00:1d:02:53 in network mk-ha-911088
	I0224 12:23:52.400697  909953 main.go:141] libmachine: (ha-911088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:02:53", ip: ""} in network mk-ha-911088: {Iface:virbr1 ExpiryTime:2025-02-24 13:19:55 +0000 UTC Type:0 Mac:52:54:00:1d:02:53 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-911088-m03 Clientid:01:52:54:00:1d:02:53}
	I0224 12:23:52.400723  909953 main.go:141] libmachine: (ha-911088-m03) DBG | domain ha-911088-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:1d:02:53 in network mk-ha-911088
	I0224 12:23:52.400883  909953 host.go:66] Checking if "ha-911088-m03" exists ...
	I0224 12:23:52.401205  909953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:23:52.401248  909953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:23:52.417547  909953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34227
	I0224 12:23:52.418141  909953 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:23:52.418663  909953 main.go:141] libmachine: Using API Version  1
	I0224 12:23:52.418687  909953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:23:52.418977  909953 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:23:52.419182  909953 main.go:141] libmachine: (ha-911088-m03) Calling .DriverName
	I0224 12:23:52.419378  909953 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 12:23:52.419405  909953 main.go:141] libmachine: (ha-911088-m03) Calling .GetSSHHostname
	I0224 12:23:52.422503  909953 main.go:141] libmachine: (ha-911088-m03) DBG | domain ha-911088-m03 has defined MAC address 52:54:00:1d:02:53 in network mk-ha-911088
	I0224 12:23:52.422979  909953 main.go:141] libmachine: (ha-911088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:02:53", ip: ""} in network mk-ha-911088: {Iface:virbr1 ExpiryTime:2025-02-24 13:19:55 +0000 UTC Type:0 Mac:52:54:00:1d:02:53 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-911088-m03 Clientid:01:52:54:00:1d:02:53}
	I0224 12:23:52.423007  909953 main.go:141] libmachine: (ha-911088-m03) DBG | domain ha-911088-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:1d:02:53 in network mk-ha-911088
	I0224 12:23:52.423180  909953 main.go:141] libmachine: (ha-911088-m03) Calling .GetSSHPort
	I0224 12:23:52.423318  909953 main.go:141] libmachine: (ha-911088-m03) Calling .GetSSHKeyPath
	I0224 12:23:52.423485  909953 main.go:141] libmachine: (ha-911088-m03) Calling .GetSSHUsername
	I0224 12:23:52.423670  909953 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/ha-911088-m03/id_rsa Username:docker}
	I0224 12:23:52.508625  909953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 12:23:52.530634  909953 kubeconfig.go:125] found "ha-911088" server: "https://192.168.39.254:8443"
	I0224 12:23:52.530671  909953 api_server.go:166] Checking apiserver status ...
	I0224 12:23:52.530707  909953 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 12:23:52.549842  909953 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1448/cgroup
	W0224 12:23:52.563477  909953 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1448/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0224 12:23:52.563539  909953 ssh_runner.go:195] Run: ls
	I0224 12:23:52.568399  909953 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0224 12:23:52.574849  909953 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0224 12:23:52.574891  909953 status.go:463] ha-911088-m03 apiserver status = Running (err=<nil>)
	I0224 12:23:52.574904  909953 status.go:176] ha-911088-m03 status: &{Name:ha-911088-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0224 12:23:52.574928  909953 status.go:174] checking status of ha-911088-m04 ...
	I0224 12:23:52.575274  909953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:23:52.575316  909953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:23:52.591594  909953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39079
	I0224 12:23:52.592111  909953 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:23:52.592649  909953 main.go:141] libmachine: Using API Version  1
	I0224 12:23:52.592671  909953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:23:52.592990  909953 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:23:52.593185  909953 main.go:141] libmachine: (ha-911088-m04) Calling .GetState
	I0224 12:23:52.594883  909953 status.go:371] ha-911088-m04 host status = "Running" (err=<nil>)
	I0224 12:23:52.594899  909953 host.go:66] Checking if "ha-911088-m04" exists ...
	I0224 12:23:52.595287  909953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:23:52.595336  909953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:23:52.610989  909953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45439
	I0224 12:23:52.611478  909953 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:23:52.612061  909953 main.go:141] libmachine: Using API Version  1
	I0224 12:23:52.612089  909953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:23:52.612480  909953 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:23:52.612701  909953 main.go:141] libmachine: (ha-911088-m04) Calling .GetIP
	I0224 12:23:52.615675  909953 main.go:141] libmachine: (ha-911088-m04) DBG | domain ha-911088-m04 has defined MAC address 52:54:00:3a:1d:d4 in network mk-ha-911088
	I0224 12:23:52.616124  909953 main.go:141] libmachine: (ha-911088-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:1d:d4", ip: ""} in network mk-ha-911088: {Iface:virbr1 ExpiryTime:2025-02-24 13:21:23 +0000 UTC Type:0 Mac:52:54:00:3a:1d:d4 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-911088-m04 Clientid:01:52:54:00:3a:1d:d4}
	I0224 12:23:52.616166  909953 main.go:141] libmachine: (ha-911088-m04) DBG | domain ha-911088-m04 has defined IP address 192.168.39.245 and MAC address 52:54:00:3a:1d:d4 in network mk-ha-911088
	I0224 12:23:52.616348  909953 host.go:66] Checking if "ha-911088-m04" exists ...
	I0224 12:23:52.616658  909953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:23:52.616700  909953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:23:52.633768  909953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39775
	I0224 12:23:52.634391  909953 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:23:52.634930  909953 main.go:141] libmachine: Using API Version  1
	I0224 12:23:52.634953  909953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:23:52.635304  909953 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:23:52.635547  909953 main.go:141] libmachine: (ha-911088-m04) Calling .DriverName
	I0224 12:23:52.635740  909953 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 12:23:52.635770  909953 main.go:141] libmachine: (ha-911088-m04) Calling .GetSSHHostname
	I0224 12:23:52.639086  909953 main.go:141] libmachine: (ha-911088-m04) DBG | domain ha-911088-m04 has defined MAC address 52:54:00:3a:1d:d4 in network mk-ha-911088
	I0224 12:23:52.639696  909953 main.go:141] libmachine: (ha-911088-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:1d:d4", ip: ""} in network mk-ha-911088: {Iface:virbr1 ExpiryTime:2025-02-24 13:21:23 +0000 UTC Type:0 Mac:52:54:00:3a:1d:d4 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-911088-m04 Clientid:01:52:54:00:3a:1d:d4}
	I0224 12:23:52.639734  909953 main.go:141] libmachine: (ha-911088-m04) DBG | domain ha-911088-m04 has defined IP address 192.168.39.245 and MAC address 52:54:00:3a:1d:d4 in network mk-ha-911088
	I0224 12:23:52.639883  909953 main.go:141] libmachine: (ha-911088-m04) Calling .GetSSHPort
	I0224 12:23:52.640054  909953 main.go:141] libmachine: (ha-911088-m04) Calling .GetSSHKeyPath
	I0224 12:23:52.640229  909953 main.go:141] libmachine: (ha-911088-m04) Calling .GetSSHUsername
	I0224 12:23:52.640355  909953 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/ha-911088-m04/id_rsa Username:docker}
	I0224 12:23:52.726541  909953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 12:23:52.744857  909953 status.go:176] ha-911088-m04 status: &{Name:ha-911088-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.53s)
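Once m02 is stopped, "minikube status" still prints the per-node breakdown above but exits non-zero (exit status 7 in this run), which is why the test treats the non-zero exit as expected rather than a failure. A sketch that runs the same status command and separates "some node is down" from "status itself could not run"; binary path and profile are as above.

package main

import (
    "errors"
    "log"
    "os/exec"
)

func main() {
    cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-911088",
        "status", "-v=7", "--alsologtostderr")
    out, err := cmd.Output() // stdout still carries the per-node breakdown on a non-zero exit
    log.Printf("status output:\n%s", out)

    var exitErr *exec.ExitError
    switch {
    case err == nil:
        log.Print("all nodes running")
    case errors.As(err, &exitErr):
        log.Printf("non-zero exit %d: at least one node is not fully running", exitErr.ExitCode())
    default:
        log.Fatalf("could not run minikube status: %v", err)
    }
}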

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (55.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 node start m02 -v=7 --alsologtostderr
E0224 12:24:12.769821  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:24:30.710168  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-911088 node start m02 -v=7 --alsologtostderr: (54.848430179s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (55.81s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (443.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-911088 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-911088 -v=7 --alsologtostderr
E0224 12:25:35.851087  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:26:46.849422  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:27:14.551801  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:29:12.769661  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-911088 -v=7 --alsologtostderr: (4m33.987130135s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-911088 --wait=true -v=7 --alsologtostderr
E0224 12:31:46.849993  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-911088 --wait=true -v=7 --alsologtostderr: (2m49.384872063s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-911088
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (443.50s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-911088 node delete m03 -v=7 --alsologtostderr: (17.776199477s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.56s)
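The last check above asks kubectl for every node's Ready condition through a go-template; the doubled quoting in the logged command is shell escaping around that template. A sketch of the same readiness query with the template passed directly, printing one True/False per remaining node.

package main

import (
    "log"
    "os/exec"
)

func main() {
    // One line of output per node: the status of its Ready condition ("True" when healthy).
    tmpl := `go-template={{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
    out, err := exec.Command("kubectl", "get", "nodes", "-o", tmpl).Output()
    if err != nil {
        log.Fatalf("kubectl get nodes failed: %v", err)
    }
    log.Printf("node Ready statuses:\n%s", out)
}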

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (272.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 stop -v=7 --alsologtostderr
E0224 12:34:12.769522  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:36:46.849216  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-911088 stop -v=7 --alsologtostderr: (4m32.657699857s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-911088 status -v=7 --alsologtostderr: exit status 7 (116.261957ms)

                                                
                                                
-- stdout --
	ha-911088
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-911088-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-911088-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0224 12:37:05.573101  914204 out.go:345] Setting OutFile to fd 1 ...
	I0224 12:37:05.573457  914204 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:37:05.573474  914204 out.go:358] Setting ErrFile to fd 2...
	I0224 12:37:05.573479  914204 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:37:05.573677  914204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-887294/.minikube/bin
	I0224 12:37:05.573865  914204 out.go:352] Setting JSON to false
	I0224 12:37:05.573893  914204 mustload.go:65] Loading cluster: ha-911088
	I0224 12:37:05.573989  914204 notify.go:220] Checking for updates...
	I0224 12:37:05.574299  914204 config.go:182] Loaded profile config "ha-911088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 12:37:05.574326  914204 status.go:174] checking status of ha-911088 ...
	I0224 12:37:05.574800  914204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:37:05.574844  914204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:37:05.596599  914204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46141
	I0224 12:37:05.597040  914204 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:37:05.597711  914204 main.go:141] libmachine: Using API Version  1
	I0224 12:37:05.597734  914204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:37:05.598138  914204 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:37:05.598429  914204 main.go:141] libmachine: (ha-911088) Calling .GetState
	I0224 12:37:05.600079  914204 status.go:371] ha-911088 host status = "Stopped" (err=<nil>)
	I0224 12:37:05.600096  914204 status.go:384] host is not running, skipping remaining checks
	I0224 12:37:05.600102  914204 status.go:176] ha-911088 status: &{Name:ha-911088 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0224 12:37:05.600125  914204 status.go:174] checking status of ha-911088-m02 ...
	I0224 12:37:05.600422  914204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:37:05.600493  914204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:37:05.615798  914204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35623
	I0224 12:37:05.616325  914204 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:37:05.616827  914204 main.go:141] libmachine: Using API Version  1
	I0224 12:37:05.616853  914204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:37:05.617230  914204 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:37:05.617483  914204 main.go:141] libmachine: (ha-911088-m02) Calling .GetState
	I0224 12:37:05.619309  914204 status.go:371] ha-911088-m02 host status = "Stopped" (err=<nil>)
	I0224 12:37:05.619328  914204 status.go:384] host is not running, skipping remaining checks
	I0224 12:37:05.619334  914204 status.go:176] ha-911088-m02 status: &{Name:ha-911088-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0224 12:37:05.619353  914204 status.go:174] checking status of ha-911088-m04 ...
	I0224 12:37:05.619672  914204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:37:05.619714  914204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:37:05.634821  914204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33885
	I0224 12:37:05.635237  914204 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:37:05.635767  914204 main.go:141] libmachine: Using API Version  1
	I0224 12:37:05.635790  914204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:37:05.636072  914204 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:37:05.636313  914204 main.go:141] libmachine: (ha-911088-m04) Calling .GetState
	I0224 12:37:05.637764  914204 status.go:371] ha-911088-m04 host status = "Stopped" (err=<nil>)
	I0224 12:37:05.637779  914204 status.go:384] host is not running, skipping remaining checks
	I0224 12:37:05.637786  914204 status.go:176] ha-911088-m04 status: &{Name:ha-911088-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.77s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (124.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-911088 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0224 12:38:09.916392  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-911088 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m3.826231531s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (124.63s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (79.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-911088 --control-plane -v=7 --alsologtostderr
E0224 12:39:12.769370  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-911088 --control-plane -v=7 --alsologtostderr: (1m18.545543733s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-911088 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.44s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

                                                
                                    
TestJSONOutput/start/Command (60.41s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-136810 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-136810 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m0.411417389s)
--- PASS: TestJSONOutput/start/Command (60.41s)
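TestJSONOutput drives the same start/pause/unpause/stop lifecycle with --output=json and asserts over the emitted step events. A sketch of a consumer for that stream, under the assumption the test itself relies on: each event arrives as one JSON object per line on stdout. Profile name and flags are copied from the logged start command; the working-directory assumption for the binary path is the same as in the earlier sketches.

package main

import (
    "bufio"
    "encoding/json"
    "log"
    "os/exec"
)

func main() {
    cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "json-output-136810",
        "--output=json", "--user=testUser", "--memory=2200", "--wait=true",
        "--driver=kvm2", "--container-runtime=crio")
    stdout, err := cmd.StdoutPipe()
    if err != nil {
        log.Fatal(err)
    }
    if err := cmd.Start(); err != nil {
        log.Fatal(err)
    }

    // Decode each line independently; anything that is not JSON is reported and skipped.
    scanner := bufio.NewScanner(stdout)
    for scanner.Scan() {
        var event map[string]interface{}
        if err := json.Unmarshal(scanner.Bytes(), &event); err != nil {
            log.Printf("skipping non-JSON line: %s", scanner.Text())
            continue
        }
        log.Printf("event: %v", event)
    }
    if err := cmd.Wait(); err != nil {
        log.Fatalf("start failed: %v", err)
    }
}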

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.75s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-136810 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-136810 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.41s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-136810 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-136810 --output=json --user=testUser: (7.408662099s)
--- PASS: TestJSONOutput/stop/Command (7.41s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-466446 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-466446 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (72.070063ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"947a4239-cc21-428c-97f9-5648bb04de0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-466446] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"aaf12873-ce84-46ea-a426-b18663399f1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20451"}}
	{"specversion":"1.0","id":"9116c630-4d3a-4676-bee0-9cd73500382b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"05ba8f8e-a9d1-47c7-8f6a-d2ab26b9b444","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20451-887294/kubeconfig"}}
	{"specversion":"1.0","id":"eb6257f3-4b54-4ead-bf19-77a54012c042","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-887294/.minikube"}}
	{"specversion":"1.0","id":"2a448bda-d1ca-46f9-a603-19bc656d6f73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"32abf846-4001-4c04-8cb9-8e0301375eef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0b0d8825-e7ca-4a6e-b697-150a0af3a600","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-466446" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-466446
--- PASS: TestErrorJSONOutput (0.22s)
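
Note: the stdout captured above is the line-delimited CloudEvents stream that `--output=json` produces (step, info and error events). As a reading aid only, here is a minimal Go sketch, not part of the test suite, that tails such a stream and prints each event's type and message; it assumes nothing beyond the fields visible in the log, where every value under "data" is a string.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors only the fields visible in the captured stdout; every value
// under "data" appears there as a string.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Hypothetical usage: minikube start -p demo --output=json | go run cloudevents_tail.go
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore any non-JSON lines
		}
		fmt.Printf("%-32s %s\n", ev.Type, ev.Data["message"])
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("    exitcode=%s name=%s\n", ev.Data["exitcode"], ev.Data["name"])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "read error:", err)
	}
}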

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (89.67s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-108025 --driver=kvm2  --container-runtime=crio
E0224 12:41:46.849937  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:42:15.854712  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-108025 --driver=kvm2  --container-runtime=crio: (42.259227304s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-122943 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-122943 --driver=kvm2  --container-runtime=crio: (44.425776548s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-108025
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-122943
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-122943" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-122943
helpers_test.go:175: Cleaning up "first-108025" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-108025
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-108025: (1.015241729s)
--- PASS: TestMinikubeProfile (89.67s)
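
Note: the profile checks above rely on `minikube profile list --output json`. A minimal sketch of driving that command from Go follows; it decodes the result generically because the log does not show the JSON schema, and it assumes a `minikube` binary on PATH rather than the locally built out/minikube-linux-amd64 used by the test.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
	if err != nil {
		log.Fatalf("profile list failed: %v", err)
	}
	// The schema is not shown in the log, so decode generically; this assumes
	// only that the command prints a single top-level JSON object.
	var doc map[string]json.RawMessage
	if err := json.Unmarshal(out, &doc); err != nil {
		log.Fatalf("unexpected output: %v", err)
	}
	for key, raw := range doc {
		fmt.Printf("%s: %s\n", key, raw)
	}
}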

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (25.87s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-239804 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-239804 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.868946045s)
--- PASS: TestMountStart/serial/StartWithMountFirst (25.87s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-239804 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-239804 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)
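
Note: VerifyMountFirst boils down to two guest-side checks, `ls /minikube-host` and `mount | grep 9p`. The sketch below is a stand-in for that shell pipeline (profile name copied from the log): it runs the same `minikube ssh -- mount` and looks for a 9p entry.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	profile := "mount-start-1-239804" // profile name copied from the log; substitute your own
	out, err := exec.Command("minikube", "-p", profile, "ssh", "--", "mount").CombinedOutput()
	if err != nil {
		log.Fatalf("minikube ssh failed: %v\n%s", err, out)
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, "9p") {
			fmt.Println("9p mount present:", strings.TrimSpace(line))
			return
		}
	}
	log.Fatal("no 9p mount found; was the profile started with --mount?")
}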

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (27.53s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-254680 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-254680 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.532411139s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.53s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-254680 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-254680 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.91s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-239804 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.91s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-254680 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-254680 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.34s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-254680
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-254680: (1.335706155s)
--- PASS: TestMountStart/serial/Stop (1.34s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (23.16s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-254680
E0224 12:44:12.769233  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-254680: (22.158348719s)
--- PASS: TestMountStart/serial/RestartStopped (23.16s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-254680 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-254680 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (119.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-397129 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-397129 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m59.542155936s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (119.96s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-397129 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-397129 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-397129 -- rollout status deployment/busybox: (4.26729474s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-397129 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-397129 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-397129 -- exec busybox-58667487b6-8l5jh -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-397129 -- exec busybox-58667487b6-x2jjv -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-397129 -- exec busybox-58667487b6-8l5jh -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-397129 -- exec busybox-58667487b6-x2jjv -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-397129 -- exec busybox-58667487b6-8l5jh -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-397129 -- exec busybox-58667487b6-x2jjv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.81s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-397129 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-397129 -- exec busybox-58667487b6-8l5jh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-397129 -- exec busybox-58667487b6-8l5jh -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-397129 -- exec busybox-58667487b6-x2jjv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-397129 -- exec busybox-58667487b6-x2jjv -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)
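
Note: PingHostFrom2Pods resolves host.minikube.internal inside each busybox pod and pings the resulting address. Below is a minimal Go sketch of the same two steps against a single pod; the pod name is a placeholder copied from the log, the nslookup/awk/cut pipeline is reproduced verbatim, and kubectl is assumed to already point at the cluster (the test goes through `minikube kubectl -p ...` instead).

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	pod := "busybox-58667487b6-8l5jh" // placeholder from the log; pick yours with `kubectl get pods`
	resolve := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
	out, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", resolve).Output()
	if err != nil {
		log.Fatalf("resolving host.minikube.internal failed: %v", err)
	}
	hostIP := strings.TrimSpace(string(out))
	fmt.Println("host.minikube.internal ->", hostIP)
	if err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", "ping -c 1 "+hostIP).Run(); err != nil {
		log.Fatalf("ping from pod failed: %v", err)
	}
	fmt.Println("pod can reach the host at", hostIP)
}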

                                                
                                    
x
+
TestMultiNode/serial/AddNode (53.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-397129 -v 3 --alsologtostderr
E0224 12:46:46.848962  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-397129 -v 3 --alsologtostderr: (53.139972243s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (53.74s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-397129 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.60s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 cp testdata/cp-test.txt multinode-397129:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 ssh -n multinode-397129 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 cp multinode-397129:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile414138692/001/cp-test_multinode-397129.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 ssh -n multinode-397129 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 cp multinode-397129:/home/docker/cp-test.txt multinode-397129-m02:/home/docker/cp-test_multinode-397129_multinode-397129-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 ssh -n multinode-397129 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 ssh -n multinode-397129-m02 "sudo cat /home/docker/cp-test_multinode-397129_multinode-397129-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 cp multinode-397129:/home/docker/cp-test.txt multinode-397129-m03:/home/docker/cp-test_multinode-397129_multinode-397129-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 ssh -n multinode-397129 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 ssh -n multinode-397129-m03 "sudo cat /home/docker/cp-test_multinode-397129_multinode-397129-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 cp testdata/cp-test.txt multinode-397129-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 ssh -n multinode-397129-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 cp multinode-397129-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile414138692/001/cp-test_multinode-397129-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 ssh -n multinode-397129-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 cp multinode-397129-m02:/home/docker/cp-test.txt multinode-397129:/home/docker/cp-test_multinode-397129-m02_multinode-397129.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 ssh -n multinode-397129-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 ssh -n multinode-397129 "sudo cat /home/docker/cp-test_multinode-397129-m02_multinode-397129.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 cp multinode-397129-m02:/home/docker/cp-test.txt multinode-397129-m03:/home/docker/cp-test_multinode-397129-m02_multinode-397129-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 ssh -n multinode-397129-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 ssh -n multinode-397129-m03 "sudo cat /home/docker/cp-test_multinode-397129-m02_multinode-397129-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 cp testdata/cp-test.txt multinode-397129-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 ssh -n multinode-397129-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 cp multinode-397129-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile414138692/001/cp-test_multinode-397129-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 ssh -n multinode-397129-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 cp multinode-397129-m03:/home/docker/cp-test.txt multinode-397129:/home/docker/cp-test_multinode-397129-m03_multinode-397129.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 ssh -n multinode-397129-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 ssh -n multinode-397129 "sudo cat /home/docker/cp-test_multinode-397129-m03_multinode-397129.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 cp multinode-397129-m03:/home/docker/cp-test.txt multinode-397129-m02:/home/docker/cp-test_multinode-397129-m03_multinode-397129-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 ssh -n multinode-397129-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 ssh -n multinode-397129-m02 "sudo cat /home/docker/cp-test_multinode-397129-m03_multinode-397129-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.58s)
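
Note: the CopyFile block above exercises every `minikube cp` direction (host to node, node to host, node to node) and verifies each copy with `minikube ssh -n <node> "sudo cat ..."`. A minimal Go sketch of the basic push-and-read-back loop, with profile and node names taken from the log, follows.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Profile and node names copied from the log; the local test file is a placeholder.
	profile := "multinode-397129"
	nodes := []string{"multinode-397129", "multinode-397129-m02"}
	local := "testdata/cp-test.txt"

	for _, node := range nodes {
		dest := node + ":/home/docker/cp-test.txt"
		if out, err := exec.Command("minikube", "-p", profile, "cp", local, dest).CombinedOutput(); err != nil {
			log.Fatalf("cp to %s failed: %v\n%s", node, err, out)
		}
		// Read the file back over ssh, exactly as the helper does above.
		out, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node,
			"sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			log.Fatalf("read back from %s failed: %v", node, err)
		}
		fmt.Printf("%s: %s", node, out)
	}
}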

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-397129 node stop m03: (1.557469966s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-397129 status: exit status 7 (443.178517ms)

                                                
                                                
-- stdout --
	multinode-397129
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-397129-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-397129-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-397129 status --alsologtostderr: exit status 7 (445.409627ms)

                                                
                                                
-- stdout --
	multinode-397129
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-397129-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-397129-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0224 12:47:46.387177  921974 out.go:345] Setting OutFile to fd 1 ...
	I0224 12:47:46.387305  921974 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:47:46.387314  921974 out.go:358] Setting ErrFile to fd 2...
	I0224 12:47:46.387319  921974 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:47:46.387495  921974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-887294/.minikube/bin
	I0224 12:47:46.387653  921974 out.go:352] Setting JSON to false
	I0224 12:47:46.387680  921974 mustload.go:65] Loading cluster: multinode-397129
	I0224 12:47:46.387749  921974 notify.go:220] Checking for updates...
	I0224 12:47:46.388086  921974 config.go:182] Loaded profile config "multinode-397129": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 12:47:46.388110  921974 status.go:174] checking status of multinode-397129 ...
	I0224 12:47:46.389125  921974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:47:46.389191  921974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:47:46.412556  921974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35195
	I0224 12:47:46.413124  921974 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:47:46.413775  921974 main.go:141] libmachine: Using API Version  1
	I0224 12:47:46.413809  921974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:47:46.414233  921974 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:47:46.414492  921974 main.go:141] libmachine: (multinode-397129) Calling .GetState
	I0224 12:47:46.416404  921974 status.go:371] multinode-397129 host status = "Running" (err=<nil>)
	I0224 12:47:46.416422  921974 host.go:66] Checking if "multinode-397129" exists ...
	I0224 12:47:46.416786  921974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:47:46.416829  921974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:47:46.433195  921974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34587
	I0224 12:47:46.433668  921974 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:47:46.434228  921974 main.go:141] libmachine: Using API Version  1
	I0224 12:47:46.434250  921974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:47:46.434591  921974 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:47:46.434822  921974 main.go:141] libmachine: (multinode-397129) Calling .GetIP
	I0224 12:47:46.437813  921974 main.go:141] libmachine: (multinode-397129) DBG | domain multinode-397129 has defined MAC address 52:54:00:e4:fe:4b in network mk-multinode-397129
	I0224 12:47:46.438306  921974 main.go:141] libmachine: (multinode-397129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:fe:4b", ip: ""} in network mk-multinode-397129: {Iface:virbr1 ExpiryTime:2025-02-24 13:44:51 +0000 UTC Type:0 Mac:52:54:00:e4:fe:4b Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-397129 Clientid:01:52:54:00:e4:fe:4b}
	I0224 12:47:46.438347  921974 main.go:141] libmachine: (multinode-397129) DBG | domain multinode-397129 has defined IP address 192.168.39.117 and MAC address 52:54:00:e4:fe:4b in network mk-multinode-397129
	I0224 12:47:46.438436  921974 host.go:66] Checking if "multinode-397129" exists ...
	I0224 12:47:46.438743  921974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:47:46.438782  921974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:47:46.455079  921974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35969
	I0224 12:47:46.455628  921974 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:47:46.456138  921974 main.go:141] libmachine: Using API Version  1
	I0224 12:47:46.456169  921974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:47:46.456496  921974 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:47:46.456717  921974 main.go:141] libmachine: (multinode-397129) Calling .DriverName
	I0224 12:47:46.456910  921974 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 12:47:46.456935  921974 main.go:141] libmachine: (multinode-397129) Calling .GetSSHHostname
	I0224 12:47:46.459771  921974 main.go:141] libmachine: (multinode-397129) DBG | domain multinode-397129 has defined MAC address 52:54:00:e4:fe:4b in network mk-multinode-397129
	I0224 12:47:46.460308  921974 main.go:141] libmachine: (multinode-397129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:fe:4b", ip: ""} in network mk-multinode-397129: {Iface:virbr1 ExpiryTime:2025-02-24 13:44:51 +0000 UTC Type:0 Mac:52:54:00:e4:fe:4b Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-397129 Clientid:01:52:54:00:e4:fe:4b}
	I0224 12:47:46.460339  921974 main.go:141] libmachine: (multinode-397129) DBG | domain multinode-397129 has defined IP address 192.168.39.117 and MAC address 52:54:00:e4:fe:4b in network mk-multinode-397129
	I0224 12:47:46.460454  921974 main.go:141] libmachine: (multinode-397129) Calling .GetSSHPort
	I0224 12:47:46.460664  921974 main.go:141] libmachine: (multinode-397129) Calling .GetSSHKeyPath
	I0224 12:47:46.460800  921974 main.go:141] libmachine: (multinode-397129) Calling .GetSSHUsername
	I0224 12:47:46.460919  921974 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/multinode-397129/id_rsa Username:docker}
	I0224 12:47:46.541026  921974 ssh_runner.go:195] Run: systemctl --version
	I0224 12:47:46.548922  921974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 12:47:46.565057  921974 kubeconfig.go:125] found "multinode-397129" server: "https://192.168.39.117:8443"
	I0224 12:47:46.565106  921974 api_server.go:166] Checking apiserver status ...
	I0224 12:47:46.565151  921974 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 12:47:46.580891  921974 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1108/cgroup
	W0224 12:47:46.591752  921974 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1108/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0224 12:47:46.591810  921974 ssh_runner.go:195] Run: ls
	I0224 12:47:46.596996  921974 api_server.go:253] Checking apiserver healthz at https://192.168.39.117:8443/healthz ...
	I0224 12:47:46.601440  921974 api_server.go:279] https://192.168.39.117:8443/healthz returned 200:
	ok
	I0224 12:47:46.601473  921974 status.go:463] multinode-397129 apiserver status = Running (err=<nil>)
	I0224 12:47:46.601483  921974 status.go:176] multinode-397129 status: &{Name:multinode-397129 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0224 12:47:46.601502  921974 status.go:174] checking status of multinode-397129-m02 ...
	I0224 12:47:46.601794  921974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:47:46.601822  921974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:47:46.617923  921974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43867
	I0224 12:47:46.618417  921974 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:47:46.618966  921974 main.go:141] libmachine: Using API Version  1
	I0224 12:47:46.618986  921974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:47:46.619335  921974 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:47:46.619546  921974 main.go:141] libmachine: (multinode-397129-m02) Calling .GetState
	I0224 12:47:46.621247  921974 status.go:371] multinode-397129-m02 host status = "Running" (err=<nil>)
	I0224 12:47:46.621267  921974 host.go:66] Checking if "multinode-397129-m02" exists ...
	I0224 12:47:46.621655  921974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:47:46.621702  921974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:47:46.637582  921974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36547
	I0224 12:47:46.638109  921974 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:47:46.638656  921974 main.go:141] libmachine: Using API Version  1
	I0224 12:47:46.638687  921974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:47:46.639074  921974 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:47:46.639302  921974 main.go:141] libmachine: (multinode-397129-m02) Calling .GetIP
	I0224 12:47:46.642136  921974 main.go:141] libmachine: (multinode-397129-m02) DBG | domain multinode-397129-m02 has defined MAC address 52:54:00:b1:38:a5 in network mk-multinode-397129
	I0224 12:47:46.642500  921974 main.go:141] libmachine: (multinode-397129-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:38:a5", ip: ""} in network mk-multinode-397129: {Iface:virbr1 ExpiryTime:2025-02-24 13:45:59 +0000 UTC Type:0 Mac:52:54:00:b1:38:a5 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:multinode-397129-m02 Clientid:01:52:54:00:b1:38:a5}
	I0224 12:47:46.642552  921974 main.go:141] libmachine: (multinode-397129-m02) DBG | domain multinode-397129-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:b1:38:a5 in network mk-multinode-397129
	I0224 12:47:46.642638  921974 host.go:66] Checking if "multinode-397129-m02" exists ...
	I0224 12:47:46.643086  921974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:47:46.643134  921974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:47:46.658868  921974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33353
	I0224 12:47:46.659390  921974 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:47:46.659936  921974 main.go:141] libmachine: Using API Version  1
	I0224 12:47:46.659960  921974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:47:46.660288  921974 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:47:46.660483  921974 main.go:141] libmachine: (multinode-397129-m02) Calling .DriverName
	I0224 12:47:46.660702  921974 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 12:47:46.660730  921974 main.go:141] libmachine: (multinode-397129-m02) Calling .GetSSHHostname
	I0224 12:47:46.663535  921974 main.go:141] libmachine: (multinode-397129-m02) DBG | domain multinode-397129-m02 has defined MAC address 52:54:00:b1:38:a5 in network mk-multinode-397129
	I0224 12:47:46.664023  921974 main.go:141] libmachine: (multinode-397129-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:38:a5", ip: ""} in network mk-multinode-397129: {Iface:virbr1 ExpiryTime:2025-02-24 13:45:59 +0000 UTC Type:0 Mac:52:54:00:b1:38:a5 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:multinode-397129-m02 Clientid:01:52:54:00:b1:38:a5}
	I0224 12:47:46.664057  921974 main.go:141] libmachine: (multinode-397129-m02) DBG | domain multinode-397129-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:b1:38:a5 in network mk-multinode-397129
	I0224 12:47:46.664247  921974 main.go:141] libmachine: (multinode-397129-m02) Calling .GetSSHPort
	I0224 12:47:46.664440  921974 main.go:141] libmachine: (multinode-397129-m02) Calling .GetSSHKeyPath
	I0224 12:47:46.664609  921974 main.go:141] libmachine: (multinode-397129-m02) Calling .GetSSHUsername
	I0224 12:47:46.664775  921974 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20451-887294/.minikube/machines/multinode-397129-m02/id_rsa Username:docker}
	I0224 12:47:46.745611  921974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 12:47:46.761611  921974 status.go:176] multinode-397129-m02 status: &{Name:multinode-397129-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0224 12:47:46.761654  921974 status.go:174] checking status of multinode-397129-m03 ...
	I0224 12:47:46.762021  921974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:47:46.762054  921974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:47:46.778628  921974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34663
	I0224 12:47:46.779170  921974 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:47:46.779721  921974 main.go:141] libmachine: Using API Version  1
	I0224 12:47:46.779744  921974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:47:46.780117  921974 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:47:46.780360  921974 main.go:141] libmachine: (multinode-397129-m03) Calling .GetState
	I0224 12:47:46.782199  921974 status.go:371] multinode-397129-m03 host status = "Stopped" (err=<nil>)
	I0224 12:47:46.782224  921974 status.go:384] host is not running, skipping remaining checks
	I0224 12:47:46.782230  921974 status.go:176] multinode-397129-m03 status: &{Name:multinode-397129-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.45s)
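
Note: both status invocations above exit non-zero (status 7 in this run) once a node is stopped, so the test asserts on the exit code rather than parsing the text. A minimal Go sketch of that exit-code check, with the profile name copied from the log, follows.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	profile := "multinode-397129" // copied from the log
	out, err := exec.Command("minikube", "-p", profile, "status").CombinedOutput()
	fmt.Print(string(out))
	if err == nil {
		fmt.Println("all nodes report Running")
		return
	}
	if exitErr, ok := err.(*exec.ExitError); ok {
		// In the run above, a stopped worker made `status` exit with code 7.
		log.Printf("status exited with code %d: at least one node is not running", exitErr.ExitCode())
		return
	}
	log.Fatalf("could not run minikube status: %v", err)
}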

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (44.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-397129 node start m03 -v=7 --alsologtostderr: (43.883589805s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (44.54s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (346.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-397129
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-397129
E0224 12:49:12.773911  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-397129: (3m3.416144581s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-397129 --wait=true -v=8 --alsologtostderr
E0224 12:51:46.850418  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:54:12.769379  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-397129 --wait=true -v=8 --alsologtostderr: (2m43.190816228s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-397129
--- PASS: TestMultiNode/serial/RestartKeepsNodes (346.71s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-397129 node delete m03: (2.272013869s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.85s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (182.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 stop
E0224 12:54:49.920708  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:56:46.852489  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-397129 stop: (3m1.921893081s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-397129 status: exit status 7 (93.411722ms)

                                                
                                                
-- stdout --
	multinode-397129
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-397129-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-397129 status --alsologtostderr: exit status 7 (97.421703ms)

                                                
                                                
-- stdout --
	multinode-397129
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-397129-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0224 12:57:22.948944  925015 out.go:345] Setting OutFile to fd 1 ...
	I0224 12:57:22.949061  925015 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:57:22.949074  925015 out.go:358] Setting ErrFile to fd 2...
	I0224 12:57:22.949080  925015 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:57:22.949294  925015 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-887294/.minikube/bin
	I0224 12:57:22.949543  925015 out.go:352] Setting JSON to false
	I0224 12:57:22.949581  925015 mustload.go:65] Loading cluster: multinode-397129
	I0224 12:57:22.949714  925015 notify.go:220] Checking for updates...
	I0224 12:57:22.950091  925015 config.go:182] Loaded profile config "multinode-397129": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 12:57:22.950115  925015 status.go:174] checking status of multinode-397129 ...
	I0224 12:57:22.950621  925015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:57:22.950682  925015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:57:22.973465  925015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41255
	I0224 12:57:22.974042  925015 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:57:22.974842  925015 main.go:141] libmachine: Using API Version  1
	I0224 12:57:22.974883  925015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:57:22.975274  925015 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:57:22.975574  925015 main.go:141] libmachine: (multinode-397129) Calling .GetState
	I0224 12:57:22.977471  925015 status.go:371] multinode-397129 host status = "Stopped" (err=<nil>)
	I0224 12:57:22.977488  925015 status.go:384] host is not running, skipping remaining checks
	I0224 12:57:22.977494  925015 status.go:176] multinode-397129 status: &{Name:multinode-397129 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0224 12:57:22.977535  925015 status.go:174] checking status of multinode-397129-m02 ...
	I0224 12:57:22.977850  925015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0224 12:57:22.977886  925015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 12:57:22.993391  925015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40563
	I0224 12:57:22.993822  925015 main.go:141] libmachine: () Calling .GetVersion
	I0224 12:57:22.994352  925015 main.go:141] libmachine: Using API Version  1
	I0224 12:57:22.994384  925015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 12:57:22.994719  925015 main.go:141] libmachine: () Calling .GetMachineName
	I0224 12:57:22.994935  925015 main.go:141] libmachine: (multinode-397129-m02) Calling .GetState
	I0224 12:57:22.996491  925015 status.go:371] multinode-397129-m02 host status = "Stopped" (err=<nil>)
	I0224 12:57:22.996509  925015 status.go:384] host is not running, skipping remaining checks
	I0224 12:57:22.996517  925015 status.go:176] multinode-397129-m02 status: &{Name:multinode-397129-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (182.11s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (115.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-397129 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0224 12:58:55.856834  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:59:12.769731  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-397129 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m54.818479029s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397129 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (115.37s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (48.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-397129
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-397129-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-397129-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (75.820745ms)

                                                
                                                
-- stdout --
	* [multinode-397129-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20451
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20451-887294/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-887294/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-397129-m02' is duplicated with machine name 'multinode-397129-m02' in profile 'multinode-397129'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-397129-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-397129-m03 --driver=kvm2  --container-runtime=crio: (47.37114418s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-397129
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-397129: exit status 80 (238.372889ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-397129 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-397129-m03 already exists in multinode-397129-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-397129-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-397129-m03: (1.009230826s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.75s)

                                                
                                    
x
+
TestScheduledStopUnix (117s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-354151 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-354151 --memory=2048 --driver=kvm2  --container-runtime=crio: (45.274242438s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-354151 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-354151 -n scheduled-stop-354151
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-354151 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0224 13:05:52.681287  894564 retry.go:31] will retry after 123.271µs: open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/scheduled-stop-354151/pid: no such file or directory
I0224 13:05:52.682498  894564 retry.go:31] will retry after 157.436µs: open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/scheduled-stop-354151/pid: no such file or directory
I0224 13:05:52.683668  894564 retry.go:31] will retry after 265.141µs: open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/scheduled-stop-354151/pid: no such file or directory
I0224 13:05:52.684796  894564 retry.go:31] will retry after 231.5µs: open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/scheduled-stop-354151/pid: no such file or directory
I0224 13:05:52.685940  894564 retry.go:31] will retry after 377.62µs: open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/scheduled-stop-354151/pid: no such file or directory
I0224 13:05:52.687057  894564 retry.go:31] will retry after 442.871µs: open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/scheduled-stop-354151/pid: no such file or directory
I0224 13:05:52.688157  894564 retry.go:31] will retry after 815.697µs: open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/scheduled-stop-354151/pid: no such file or directory
I0224 13:05:52.689278  894564 retry.go:31] will retry after 2.44113ms: open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/scheduled-stop-354151/pid: no such file or directory
I0224 13:05:52.692530  894564 retry.go:31] will retry after 2.50315ms: open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/scheduled-stop-354151/pid: no such file or directory
I0224 13:05:52.695761  894564 retry.go:31] will retry after 2.114781ms: open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/scheduled-stop-354151/pid: no such file or directory
I0224 13:05:52.698967  894564 retry.go:31] will retry after 4.145519ms: open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/scheduled-stop-354151/pid: no such file or directory
I0224 13:05:52.704244  894564 retry.go:31] will retry after 10.149584ms: open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/scheduled-stop-354151/pid: no such file or directory
I0224 13:05:52.715522  894564 retry.go:31] will retry after 9.725277ms: open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/scheduled-stop-354151/pid: no such file or directory
I0224 13:05:52.725797  894564 retry.go:31] will retry after 19.60456ms: open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/scheduled-stop-354151/pid: no such file or directory
I0224 13:05:52.746049  894564 retry.go:31] will retry after 31.554537ms: open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/scheduled-stop-354151/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-354151 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-354151 -n scheduled-stop-354151
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-354151
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-354151 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0224 13:06:46.852277  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-354151
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-354151: exit status 7 (74.373976ms)

                                                
                                                
-- stdout --
	scheduled-stop-354151
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-354151 -n scheduled-stop-354151
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-354151 -n scheduled-stop-354151: exit status 7 (73.82252ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-354151" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-354151
--- PASS: TestScheduledStopUnix (117.00s)
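
The scheduled-stop flow above can be driven by hand with the same flags; a minimal sketch (profile name reused from this run, durations arbitrary):

$ out/minikube-linux-amd64 stop -p scheduled-stop-354151 --schedule 5m
# schedules a stop five minutes out instead of stopping immediately
$ out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-354151
# reports the pending schedule via the TimeToStop field
$ out/minikube-linux-amd64 stop -p scheduled-stop-354151 --cancel-scheduled
# cancels a pending scheduled stop
$ out/minikube-linux-amd64 stop -p scheduled-stop-354151 --schedule 15s
$ out/minikube-linux-amd64 status -p scheduled-stop-354151
# once the short schedule fires, status reports Stopped and exits with status 7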

                                                
                                    
TestRunningBinaryUpgrade (239.98s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.599804324 start -p running-upgrade-271664 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.599804324 start -p running-upgrade-271664 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m14.03342222s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-271664 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-271664 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m42.014226753s)
helpers_test.go:175: Cleaning up "running-upgrade-271664" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-271664
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-271664: (1.241113634s)
--- PASS: TestRunningBinaryUpgrade (239.98s)
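
The running-binary upgrade path is two starts against the same profile: first with the released v1.26.0 binary, then with the binary under test, followed by cleanup. The exact invocations from this run:

$ /tmp/minikube-v1.26.0.599804324 start -p running-upgrade-271664 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
$ out/minikube-linux-amd64 start -p running-upgrade-271664 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
$ out/minikube-linux-amd64 delete -p running-upgrade-271664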

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-248837 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-248837 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (92.995039ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-248837] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20451
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20451-887294/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-887294/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
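
As the stderr above states, --no-kubernetes and --kubernetes-version are mutually exclusive, and minikube itself prints the remedy. A sketch of the failing call and the fix:

$ out/minikube-linux-amd64 start -p NoKubernetes-248837 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
# exit status 14: MK_USAGE, cannot specify --kubernetes-version with --no-kubernetes
$ minikube config unset kubernetes-version
# clears any globally configured version, as suggested in the error output
$ out/minikube-linux-amd64 start -p NoKubernetes-248837 --no-kubernetes --driver=kvm2 --container-runtime=crio
# the version-free form used by the later StartWithStopK8s step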

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (104.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-248837 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-248837 --driver=kvm2  --container-runtime=crio: (1m44.350781923s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-248837 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (104.62s)

                                                
                                    
TestNetworkPlugins/group/false (3.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-799329 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-799329 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (112.298773ms)

                                                
                                                
-- stdout --
	* [false-799329] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20451
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20451-887294/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-887294/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0224 13:07:55.900418  930434 out.go:345] Setting OutFile to fd 1 ...
	I0224 13:07:55.900685  930434 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 13:07:55.900699  930434 out.go:358] Setting ErrFile to fd 2...
	I0224 13:07:55.900706  930434 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 13:07:55.900905  930434 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-887294/.minikube/bin
	I0224 13:07:55.901745  930434 out.go:352] Setting JSON to false
	I0224 13:07:55.902832  930434 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10217,"bootTime":1740392259,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 13:07:55.902948  930434 start.go:139] virtualization: kvm guest
	I0224 13:07:55.905287  930434 out.go:177] * [false-799329] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0224 13:07:55.906867  930434 out.go:177]   - MINIKUBE_LOCATION=20451
	I0224 13:07:55.906877  930434 notify.go:220] Checking for updates...
	I0224 13:07:55.910150  930434 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 13:07:55.911849  930434 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20451-887294/kubeconfig
	I0224 13:07:55.913462  930434 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-887294/.minikube
	I0224 13:07:55.914813  930434 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0224 13:07:55.916335  930434 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 13:07:55.918261  930434 config.go:182] Loaded profile config "NoKubernetes-248837": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 13:07:55.918370  930434 config.go:182] Loaded profile config "offline-crio-226975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 13:07:55.918453  930434 config.go:182] Loaded profile config "running-upgrade-271664": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0224 13:07:55.918538  930434 driver.go:394] Setting default libvirt URI to qemu:///system
	I0224 13:07:55.955090  930434 out.go:177] * Using the kvm2 driver based on user configuration
	I0224 13:07:55.956385  930434 start.go:297] selected driver: kvm2
	I0224 13:07:55.956406  930434 start.go:901] validating driver "kvm2" against <nil>
	I0224 13:07:55.956421  930434 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 13:07:55.958272  930434 out.go:201] 
	W0224 13:07:55.959753  930434 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0224 13:07:55.961103  930434 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-799329 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-799329

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-799329

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-799329

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-799329

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-799329

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-799329

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-799329

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-799329

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-799329

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-799329

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-799329

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-799329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-799329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-799329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-799329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-799329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-799329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-799329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-799329" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-799329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-799329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-799329" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-799329

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-799329"

                                                
                                                
----------------------- debugLogs end: false-799329 [took: 2.949452924s] --------------------------------
helpers_test.go:175: Cleaning up "false-799329" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-799329
--- PASS: TestNetworkPlugins/group/false (3.22s)
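
The only real assertion in this group is the MK_USAGE check: with the crio runtime, --cni=false is rejected before any VM is created, which is why the group finishes in about three seconds and every debugLogs probe above reports a missing context. A sketch of the rejected call and an accepted CNI choice (the bridge invocation is the one used later in TestNetworkPlugins/group/bridge/Start):

$ out/minikube-linux-amd64 start -p false-799329 --memory=2048 --cni=false --driver=kvm2 --container-runtime=crio
# exit status 14: MK_USAGE, the "crio" container runtime requires CNI
$ out/minikube-linux-amd64 start -p bridge-799329 --memory=3072 --cni=bridge --driver=kvm2 --container-runtime=crio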

                                                
                                    
TestPause/serial/Start (105.63s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-290993 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-290993 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m45.631683952s)
--- PASS: TestPause/serial/Start (105.63s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (68.93s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-248837 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0224 13:09:12.769840  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-248837 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m7.81016942s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-248837 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-248837 status -o json: exit status 2 (265.110992ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-248837","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-248837
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (68.93s)
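
Re-running start on an existing profile with --no-kubernetes keeps the guest VM but leaves Kubernetes down, which is what the JSON status above reflects (Host Running, Kubelet and APIServer Stopped, status exiting 2). Sketch:

$ out/minikube-linux-amd64 start -p NoKubernetes-248837 --no-kubernetes --driver=kvm2 --container-runtime=crio
$ out/minikube-linux-amd64 -p NoKubernetes-248837 status -o json
# {"Name":"NoKubernetes-248837","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped",...}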

                                                
                                    
TestNoKubernetes/serial/Start (28.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-248837 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-248837 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.819318454s)
--- PASS: TestNoKubernetes/serial/Start (28.82s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-248837 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-248837 "sudo systemctl is-active --quiet service kubelet": exit status 1 (216.164113ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
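
The verification is a plain systemd probe over SSH; a non-zero exit (status 3 here, which systemctl uses for units that are not active) is exactly what the test wants when Kubernetes is disabled. Manual equivalent, straight from the log:

$ out/minikube-linux-amd64 ssh -p NoKubernetes-248837 "sudo systemctl is-active --quiet service kubelet"
# exits 0 only if the kubelet unit is active; the test asserts a failure here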

                                                
                                    
TestNoKubernetes/serial/ProfileList (25.5s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (14.452981564s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (11.044340517s)
--- PASS: TestNoKubernetes/serial/ProfileList (25.50s)
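
Both listing forms are exercised, the plain table and the machine-readable JSON; a sketch:

$ out/minikube-linux-amd64 profile list
$ out/minikube-linux-amd64 profile list --output=json
# each form reports per-profile status, which is likely why the two calls take ~25s with several clusters live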

                                                
                                    
TestNoKubernetes/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-248837
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-248837: (1.307621857s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (22.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-248837 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-248837 --driver=kvm2  --container-runtime=crio: (22.628751442s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (22.63s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-248837 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-248837 "sudo systemctl is-active --quiet service kubelet": exit status 1 (229.321366ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.72s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.72s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (128.83s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2437140950 start -p stopped-upgrade-988365 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2437140950 start -p stopped-upgrade-988365 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m21.423449151s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2437140950 -p stopped-upgrade-988365 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2437140950 -p stopped-upgrade-988365 stop: (2.153211268s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-988365 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-988365 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (45.253069054s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (128.83s)
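
Same idea as TestRunningBinaryUpgrade, except the old cluster is stopped before the new binary takes it over. The invocations from this run:

$ /tmp/minikube-v1.26.0.2437140950 start -p stopped-upgrade-988365 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
$ /tmp/minikube-v1.26.0.2437140950 -p stopped-upgrade-988365 stop
$ out/minikube-linux-amd64 start -p stopped-upgrade-988365 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio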

                                                
                                    
TestNetworkPlugins/group/auto/Start (60.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-799329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-799329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m0.173760158s)
--- PASS: TestNetworkPlugins/group/auto/Start (60.17s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-799329 "pgrep -a kubelet"
I0224 13:13:54.106230  894564 config.go:182] Loaded profile config "auto-799329": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.40s)
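
KubeletFlags only inspects the kubelet command line inside the guest; the manual equivalent is the ssh call from the log:

$ out/minikube-linux-amd64 ssh -p auto-799329 "pgrep -a kubelet"
# prints the kubelet PID and full argument list for flag inspection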

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-799329 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-4c57p" [8eaddfaa-ab98-42b4-9c6c-33d66a70cb1f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-4c57p" [8eaddfaa-ab98-42b4-9c6c-33d66a70cb1f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004111896s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.73s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-799329 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-799329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-799329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
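
The NetCatPod/DNS/Localhost/HairPin steps in every network-plugin group reduce to deploying a netcat pod and running three probes inside it; a sketch using the auto-799329 context from this run (the interpretation of each probe is noted in the comments):

$ kubectl --context auto-799329 replace --force -f testdata/netcat-deployment.yaml
$ kubectl --context auto-799329 exec deployment/netcat -- nslookup kubernetes.default
# DNS: in-cluster service discovery resolves
$ kubectl --context auto-799329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# Localhost: the pod can reach its own listener on 8080
$ kubectl --context auto-799329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
# HairPin: the pod can reach itself back through the netcat service name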

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-988365
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-988365: (1.179827411s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (70.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-799329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-799329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m10.263867739s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (70.26s)
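
Each group's Start step differs only in how the CNI is selected, all on kvm2 with crio. One full invocation from this run, with the other groups' variations summarized:

$ out/minikube-linux-amd64 start -p kindnet-799329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 --container-runtime=crio
# the other groups in this run swap only the profile name and the CNI selection:
#   auto                -> no --cni flag (minikube picks a default)
#   calico              -> --cni=calico
#   custom-flannel      -> --cni=testdata/kube-flannel.yaml (custom manifest)
#   enable-default-cni  -> --enable-default-cni=true
#   flannel             -> --cni=flannel
#   bridge              -> --cni=bridge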

                                                
                                    
TestNetworkPlugins/group/calico/Start (98.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-799329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-799329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m38.960155451s)
--- PASS: TestNetworkPlugins/group/calico/Start (98.96s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-9tvpr" [8ec0704c-c773-4b56-ab1e-bca6dae97f46] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.0099166s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
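
The ControllerPod step waits for the CNI's controller/daemon pod to be Running under its well-known label. A rough manual equivalent (a sketch, not the test's own helper; labels and namespaces taken from the log):

$ kubectl --context kindnet-799329 get pods -n kube-system -l app=kindnet
$ kubectl --context calico-799329 get pods -n kube-system -l k8s-app=calico-node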

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-799329 "pgrep -a kubelet"
I0224 13:15:29.992746  894564 config.go:182] Loaded profile config "kindnet-799329": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-799329 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-95d5n" [0d1d976e-ca35-40e1-a299-b8529a4b6eb1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-95d5n" [0d1d976e-ca35-40e1-a299-b8529a4b6eb1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004488674s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (74.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-799329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0224 13:15:35.858925  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-799329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m14.333755406s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (74.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-799329 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-799329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-799329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (93.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-799329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-799329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m33.676574518s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (93.68s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (104.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-799329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-799329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m44.229704015s)
--- PASS: TestNetworkPlugins/group/flannel/Start (104.23s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-n4cng" [75f74afd-81fc-4e75-a0bf-4135a067f09d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007908314s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-799329 "pgrep -a kubelet"
I0224 13:16:10.318885  894564 config.go:182] Loaded profile config "calico-799329": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-799329 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-dssg4" [d8a51bc4-de46-439d-adf4-3e25fefb5032] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-dssg4" [d8a51bc4-de46-439d-adf4-3e25fefb5032] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.00465805s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.27s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-799329 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-799329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-799329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)
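For readers who want to reproduce the three calico connectivity probes above (DNS, Localhost, HairPin) by hand, the sketch below simply replays the kubectl commands recorded in this block; the context name is taken from the log, and the netcat deployment is assumed to still be running.

# Sketch: replay of the connectivity probes logged above (context name from this report)
CTX=calico-799329
# DNS: resolve the kubernetes.default service from inside the netcat deployment
kubectl --context "$CTX" exec deployment/netcat -- nslookup kubernetes.default
# Localhost: the pod reaches its own port 8080 over localhost
kubectl --context "$CTX" exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# HairPin: the pod reaches itself through its own service name
kubectl --context "$CTX" exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"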

                                                
                                    
TestNetworkPlugins/group/bridge/Start (105.14s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-799329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-799329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m45.139150583s)
--- PASS: TestNetworkPlugins/group/bridge/Start (105.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-799329 "pgrep -a kubelet"
I0224 13:16:46.794772  894564 config.go:182] Loaded profile config "custom-flannel-799329": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-799329 replace --force -f testdata/netcat-deployment.yaml
E0224 13:16:46.849421  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-c62bv" [cf2c81a0-d74c-421d-95de-3c9a626b18a7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-c62bv" [cf2c81a0-d74c-421d-95de-3c9a626b18a7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.012392058s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-799329 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-799329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-799329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-799329 "pgrep -a kubelet"
I0224 13:17:30.570096  894564 config.go:182] Loaded profile config "enable-default-cni-799329": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.4s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-799329 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-8qjr5" [94989d70-8977-48d4-ac17-98982c2b62a4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-8qjr5" [94989d70-8977-48d4-ac17-98982c2b62a4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 14.004915208s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.40s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-2mzmg" [5438f4e7-7ac9-4826-b616-cac4d1223bbf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004265364s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-799329 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-799329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-799329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-799329 "pgrep -a kubelet"
I0224 13:17:50.071236  894564 config.go:182] Loaded profile config "flannel-799329": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (13.56s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-799329 replace --force -f testdata/netcat-deployment.yaml
I0224 13:17:50.373583  894564 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-dm9j9" [1dd3a9f6-3b4e-4a99-90b1-5110af8a0f3b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-dm9j9" [1dd3a9f6-3b4e-4a99-90b1-5110af8a0f3b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.13175027s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.56s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-799329 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-799329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-799329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (105.94s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-956442 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-956442 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (1m45.939873695s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (105.94s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (100.26s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-037381 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-037381 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (1m40.262864321s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (100.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-799329 "pgrep -a kubelet"
I0224 13:18:27.967047  894564 config.go:182] Loaded profile config "bridge-799329": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-799329 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-6qn7g" [51086719-ef8a-47b7-a02d-257ec97b6a96] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-6qn7g" [51086719-ef8a-47b7-a02d-257ec97b6a96] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003892152s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-799329 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-799329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-799329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (61.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-108648 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0224 13:18:57.380631  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/auto-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:18:59.942439  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/auto-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:19:05.063981  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/auto-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:19:12.769449  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:19:15.305896  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/auto-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:19:35.788021  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/auto-799329/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-108648 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (1m1.265898273s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (61.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.3s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-956442 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [de7b52a7-29ee-4d50-b971-562a2d80ccb5] Pending
helpers_test.go:344: "busybox" [de7b52a7-29ee-4d50-b971-562a2d80ccb5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [de7b52a7-29ee-4d50-b971-562a2d80ccb5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004726181s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-956442 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-108648 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [914a852e-b7e9-4615-a177-a12b70022cec] Pending
helpers_test.go:344: "busybox" [914a852e-b7e9-4615-a177-a12b70022cec] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [914a852e-b7e9-4615-a177-a12b70022cec] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.003844301s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-108648 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (12.31s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-037381 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [23bb4c32-f28e-4b3a-8dcc-80e60ea09ad6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [23bb4c32-f28e-4b3a-8dcc-80e60ea09ad6] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 12.003283328s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-037381 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-956442 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-956442 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.040765261s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-956442 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (90.88s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-956442 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-956442 --alsologtostderr -v=3: (1m30.875473909s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (90.88s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-108648 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-108648 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-108648 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-108648 --alsologtostderr -v=3: (1m31.034954163s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.04s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-037381 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-037381 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (91.04s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-037381 --alsologtostderr -v=3
E0224 13:20:16.750413  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/auto-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:20:23.723538  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kindnet-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:20:23.730004  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kindnet-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:20:23.741488  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kindnet-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:20:23.763064  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kindnet-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:20:23.804551  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kindnet-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:20:23.886456  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kindnet-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:20:24.048110  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kindnet-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:20:24.369673  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kindnet-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:20:25.011977  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kindnet-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:20:26.293566  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kindnet-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:20:28.854997  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kindnet-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:20:33.977365  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kindnet-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:20:44.218891  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kindnet-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:21:04.068585  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/calico-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:21:04.074998  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/calico-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:21:04.086429  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/calico-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:21:04.107937  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/calico-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:21:04.149443  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/calico-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:21:04.231012  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/calico-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:21:04.392546  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/calico-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:21:04.700204  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kindnet-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:21:04.714723  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/calico-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:21:05.356613  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/calico-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:21:06.638649  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/calico-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:21:09.200573  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/calico-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:21:14.322297  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/calico-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:21:24.564561  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/calico-799329/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-037381 --alsologtostderr -v=3: (1m31.039945133s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.04s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-956442 -n no-preload-956442
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-956442 -n no-preload-956442: exit status 7 (77.650438ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-956442 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
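As the block above records, "status" exits with code 7 once the host is stopped, and the test notes that this "may be ok" before it enables the dashboard addon. A minimal shell sketch of the same tolerance, assuming the no-preload-956442 profile from this block and using only the flags shown in the log:

# Sketch: tolerate exit status 7 from "status" on a stopped host, then enable an addon
code=0
out/minikube-linux-amd64 status --format='{{.Host}}' -p no-preload-956442 -n no-preload-956442 || code=$?
if [ "$code" -eq 7 ]; then
    echo "host reports Stopped (exit 7); this is expected here"
fi
out/minikube-linux-amd64 addons enable dashboard -p no-preload-956442 --images=MetricsScraper=registry.k8s.io/echoserver:1.4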

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (350.9s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-956442 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0224 13:21:38.672501  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/auto-799329/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-956442 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (5m50.472758446s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-956442 -n no-preload-956442
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (350.90s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-108648 -n default-k8s-diff-port-108648
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-108648 -n default-k8s-diff-port-108648: exit status 7 (77.035919ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-108648 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (353.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-108648 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0224 13:21:45.046341  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/calico-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:21:45.662017  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/kindnet-799329/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-108648 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (5m53.600242986s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-108648 -n default-k8s-diff-port-108648
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (353.96s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-037381 -n embed-certs-037381
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-037381 -n embed-certs-037381: exit status 7 (79.746959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-037381 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (335.91s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-037381 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0224 13:21:46.849239  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/functional-892991/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:21:47.050011  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/custom-flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:21:47.056491  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/custom-flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:21:47.067939  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/custom-flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:21:47.089389  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/custom-flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:21:47.130851  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/custom-flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:21:47.212381  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/custom-flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:21:47.373994  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/custom-flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:21:47.695763  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/custom-flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:21:48.337924  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/custom-flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:21:49.619578  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/custom-flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:21:52.181874  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/custom-flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:21:57.303915  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/custom-flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-037381 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (5m35.575030469s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-037381 -n embed-certs-037381
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (335.91s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (2.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-233759 --alsologtostderr -v=3
E0224 13:23:52.887179  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/enable-default-cni-799329/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-233759 --alsologtostderr -v=3: (2.445878904s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.45s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-233759 -n old-k8s-version-233759
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-233759 -n old-k8s-version-233759: exit status 7 (79.342923ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-233759 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-dfj26" [482cef5a-8219-403e-8cd8-a2f7cb068d91] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004791038s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (17.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-dn2dn" [42e4cf1a-c6d3-4fe1-bf0f-d701daa26fe5] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-dn2dn" [42e4cf1a-c6d3-4fe1-bf0f-d701daa26fe5] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.00506663s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (17.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-dfj26" [482cef5a-8219-403e-8cd8-a2f7cb068d91] Running
E0224 13:27:30.949202  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/enable-default-cni-799329/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005662393s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-037381 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-037381 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.42s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-037381 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-037381 --alsologtostderr -v=1: (1.314608571s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-037381 -n embed-certs-037381
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-037381 -n embed-certs-037381: exit status 2 (312.506601ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-037381 -n embed-certs-037381
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-037381 -n embed-certs-037381: exit status 2 (291.651536ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-037381 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-037381 -n embed-certs-037381
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-037381 -n embed-certs-037381
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.42s)
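The pause check above follows a similar convention: while the profile is paused, "status" reports Paused/Stopped and exits with code 2, which the test treats as acceptable before unpausing. A rough replay in shell, assuming the embed-certs-037381 profile from this block and only the commands shown in the log:

# Sketch: pause, confirm via status (exit 2 is expected while paused), then unpause
P=embed-certs-037381
out/minikube-linux-amd64 pause -p "$P" --alsologtostderr -v=1
out/minikube-linux-amd64 status --format='{{.APIServer}}' -p "$P" -n "$P" || true   # prints "Paused", exits 2
out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p "$P" -n "$P" || true     # prints "Stopped", exits 2
out/minikube-linux-amd64 unpause -p "$P" --alsologtostderr -v=1
out/minikube-linux-amd64 status --format='{{.APIServer}}' -p "$P" -n "$P"
out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p "$P" -n "$P"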

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-wcrxj" [5a89c307-c9b0-4059-a88c-aa7f8fc50788] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-wcrxj" [5a89c307-c9b0-4059-a88c-aa7f8fc50788] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.003825814s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (47.91s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-651381 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-651381 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (47.912407924s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.91s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-dn2dn" [42e4cf1a-c6d3-4fe1-bf0f-d701daa26fe5] Running
E0224 13:27:43.833935  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/flannel-799329/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004498482s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-956442 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-956442 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-956442 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-956442 -n no-preload-956442
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-956442 -n no-preload-956442: exit status 2 (292.860267ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-956442 -n no-preload-956442
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-956442 -n no-preload-956442: exit status 2 (301.010406ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-956442 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-956442 -n no-preload-956442
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-956442 -n no-preload-956442
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-wcrxj" [5a89c307-c9b0-4059-a88c-aa7f8fc50788] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004436527s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-108648 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-108648 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.91s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-108648 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-108648 -n default-k8s-diff-port-108648
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-108648 -n default-k8s-diff-port-108648: exit status 2 (260.759433ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-108648 -n default-k8s-diff-port-108648
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-108648 -n default-k8s-diff-port-108648: exit status 2 (264.229348ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-108648 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-108648 -n default-k8s-diff-port-108648
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-108648 -n default-k8s-diff-port-108648
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.91s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-651381 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-651381 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.197996539s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (11.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-651381 --alsologtostderr -v=3
E0224 13:28:28.219056  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/bridge-799329/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-651381 --alsologtostderr -v=3: (11.361706626s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.36s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-651381 -n newest-cni-651381
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-651381 -n newest-cni-651381: exit status 7 (73.721447ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-651381 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (37.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-651381 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0224 13:28:54.808787  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/auto-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:28:55.922248  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/bridge-799329/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:29:12.769427  894564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-887294/.minikube/profiles/addons-641952/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-651381 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (36.878298064s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-651381 -n newest-cni-651381
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.16s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-651381 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.61s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-651381 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-651381 -n newest-cni-651381
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-651381 -n newest-cni-651381: exit status 2 (262.967634ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-651381 -n newest-cni-651381
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-651381 -n newest-cni-651381: exit status 2 (263.953125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-651381 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-651381 -n newest-cni-651381
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-651381 -n newest-cni-651381
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.61s)
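Each of the serial/Pause subtests above follows the same sequence: pause the profile, confirm the apiserver reports Paused and the kubelet reports Stopped (status exits 2 while paused, which the test treats as acceptable), then unpause and check status again. A minimal shell sketch of that sequence against the last profile in this run; the binary path, profile name, and flags are taken from the log above, while the post-unpause expectation is an assumption since the log does not print those final status values:

	out/minikube-linux-amd64 pause -p newest-cni-651381 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-651381 || true   # prints Paused; exits 2 while paused
	out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p newest-cni-651381 || true     # prints Stopped; exits 2 while paused
	out/minikube-linux-amd64 unpause -p newest-cni-651381 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-651381           # assumed to report Running again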

                                                
                                    

Test skip (40/321)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.2/cached-images 0
15 TestDownloadOnly/v1.32.2/binaries 0
16 TestDownloadOnly/v1.32.2/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.3
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
129 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
206 TestKicCustomNetwork 0
207 TestKicExistingNetwork 0
208 TestKicCustomSubnet 0
209 TestKicStaticIP 0
241 TestChangeNoneUser 0
244 TestScheduledStopWindows 0
246 TestSkaffold 0
248 TestInsufficientStorage 0
252 TestMissingContainerUpgrade 0
258 TestNetworkPlugins/group/kubenet 3.03
266 TestNetworkPlugins/group/cilium 4.25
281 TestStartStop/group/disable-driver-mounts 0.17
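Every entry in this skip table is gated on driver, container runtime, or host platform rather than on a code failure, so the same groups run when the suite is invoked with a matching configuration. A hedged sketch of such a re-run for one Docker-only group, assuming minikube's standard integration-test harness with its integration build tag and -minikube-start-args flag (neither is shown in this report; adjust to the local checkout):

	go test -tags=integration -timeout=60m ./test/integration \
	  -run 'TestDockerFlags' \
	  -minikube-start-args="--driver=docker --container-runtime=docker"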
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.3s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-641952 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
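All eight TunnelCmd subtests above skip for the same reason: minikube tunnel needs to modify host routes, and the CI user cannot run 'route' without a password. A hedged sketch of the sort of sudoers drop-in that would lift that restriction on a host like this one; the username comes from the log paths, but the binary paths are assumptions and the tunnel may need commands beyond these two, so verify with 'which route' and 'which ip' before copying:

	# /etc/sudoers.d/minikube-tunnel (hypothetical)
	jenkins ALL=(ALL) NOPASSWD: /usr/sbin/route, /usr/sbin/ip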

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-799329 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-799329

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-799329

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-799329

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-799329

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-799329

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-799329

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-799329

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-799329

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-799329

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-799329

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-799329

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-799329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-799329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-799329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-799329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-799329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-799329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-799329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-799329" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-799329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-799329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-799329" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-799329

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-799329"

                                                
                                                
----------------------- debugLogs end: kubenet-799329 [took: 2.881904018s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-799329" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-799329
--- SKIP: TestNetworkPlugins/group/kubenet (3.03s)
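TestNetworkPlugins/group/kubenet is skipped because, with crio as the runtime, the cluster needs a CNI plugin and kubenet does not provide one. For comparison, a hedged sketch of starting an equivalent profile with one of minikube's built-in CNIs instead, reusing the driver and runtime flags from this run (the choice of --cni=bridge is an assumption, not something this report exercised):

	out/minikube-linux-amd64 start -p kubenet-799329 --driver=kvm2 --container-runtime=crio --cni=bridge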

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-799329 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-799329

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-799329

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-799329

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-799329

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-799329

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-799329

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-799329

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-799329

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-799329

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-799329

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-799329

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-799329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-799329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-799329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-799329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-799329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-799329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-799329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-799329" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-799329

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-799329

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-799329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-799329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-799329

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-799329

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-799329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-799329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-799329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-799329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-799329" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-799329

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-799329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-799329"

                                                
                                                
----------------------- debugLogs end: cilium-799329 [took: 4.065604619s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-799329" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-799329
--- SKIP: TestNetworkPlugins/group/cilium (4.25s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-721691" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-721691
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    